feat(documents): bulk upload — split-panel with file switcher #329
Closes #317
Summary
Extends `/documents/new` to support N=0/1/≥2 files in a single route with three visual states. Users drop files onto a full-panel drop zone, see local PDF filenames immediately, edit per-file titles derived from filenames, fill shared metadata once (sender, date, tags), then save — files are POSTed in chunks of 10 to `/api/documents/quick-upload`, which now accepts an optional typed `metadata` JSON part.

Backend changes (10 commits)
- `DocumentBatchMetadataDTO` — new typed DTO for the optional `metadata` multipart part
- `quickUpload` extended: accepts optional `metadata`, applies shared fields to all created/updated docs, maps `titles[i]` → `files[i]` by index, enforces the 50-file batch cap (`BATCH_TOO_LARGE` → 400), validates the titles array length
- `application.yaml`: `max-request-size` raised to 500 MB; `file-size-threshold: 2KB`
- New structured log line with fields `actor`, `files`, `totalBytes`, `withMetadata`, `created`, `updated`, `errors`
- 5 new `DocumentControllerTest` methods (all green, 67 tests total)

Frontend changes
- `BulkDropZone.svelte` — full-panel drop target, `onFilesAdded` callback, `multiple` file input
- `FileSwitcherStrip.svelte` — horizontal chip strip, arrow-key navigation (stays within strip), active/error chip states with `aria-current` + `data-status`
- `ScopeCard.svelte` — per-file (mint tint) and shared (neutral + count badge) card variants
- `UploadSaveBar.svelte` — sticky save bar, determinate progress bar during chunked upload, discard link
- `BulkDocumentEditLayout.svelte` — state owner: `SvelteMap<string, FileEntry>`, N=0/1/≥2 layout switching, chunked save at 10 files/request, `goto('/documents')` on completion
- `/documents/new` wired to `BulkDocumentEditLayout`; server load function unchanged
- `bulkTitleFromFilename()` utility — strips the extension, replaces `_`/`-` with spaces
- `BATCH_TOO_LARGE` mirrored in `errors.ts` + Paraglide keys (de/en/es)
- Two CTA keys (`bulk_save_cta_one`/`bulk_save_cta`), since Paraglide 2.5 does not support ICU plurals natively
- `DocumentBatchMetadataDTO` now in `$lib/generated/api.ts`
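The title-derivation utility is small enough to sketch. Assuming only the behavior described above (strip the extension, replace `_`/`-` with spaces), a minimal version might look like:

```typescript
// Hypothetical sketch of bulkTitleFromFilename() as described in the PR:
// drop the trailing file extension, then turn underscores/hyphens into spaces.
function bulkTitleFromFilename(filename: string): string {
  const withoutExtension = filename.replace(/\.[^./\\]+$/, "");
  return withoutExtension.replace(/[_-]+/g, " ").trim();
}
```

For example, `Brief_an_Oma-1923.pdf` would become the editable title `Brief an Oma 1923`.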
22 new browser-mode component tests + 366 server-side tests (all passing).
🏛️ Markus Keller (@mkeller) — Senior Application Architect
Verdict: ⚠️ Approved with concerns
The structural choices here are sound — the monolith stays intact, no new routes, no new infrastructure, and the overload approach for `quick-upload` keeps the API surface minimal. Good call. A few layer-boundary issues and one DTO design inconsistency need attention before this solidifies.

Blockers
1. Batch validation logic sits in the Controller (`DocumentController.java:204–208`)

These are domain rules, not HTTP concerns. The Controller's job is to translate HTTP into service calls; the Service's job is to enforce business invariants. If a second consumer of `storeDocumentWithBatchMetadata` is added (import job, test harness), it bypasses these checks entirely. Move both guard clauses into `DocumentService.storeDocumentWithBatchMetadata()` or into a dedicated `validateBatch()` method.
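For illustration, the two guard clauses the service would own can be sketched as follows — in TypeScript rather than the service's actual Java, and with hypothetical error names (only `BATCH_TOO_LARGE` and the 50-file cap come from the PR):

```typescript
const MAX_BATCH_SIZE = 50; // the BATCH_TOO_LARGE threshold from the PR description

// Hypothetical mirror of a service-level validateBatch(): both invariants live
// in one method so every caller (HTTP or otherwise) passes through them.
function validateBatch(fileCount: number, titleCount: number): void {
  if (fileCount > MAX_BATCH_SIZE) {
    throw new Error("BATCH_TOO_LARGE"); // mapped to HTTP 400 at the controller
  }
  if (titleCount > fileCount) {
    throw new Error("TITLES_EXCEED_FILES"); // illustrative name, not from the PR
  }
}
```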
2. `DocumentBatchMetadataDTO.tags` is `String` (comma-delimited) — inconsistent with the rest of the domain

Everywhere else in the codebase, tags are structured objects. Accepting a comma-delimited string for the batch path but not elsewhere creates an inconsistency that callers have to special-case. It also has no length validation — a 10 KB tag string passes through.
Use `private List<String> tagNames;` to match the domain model. Jackson handles the serialization for free.

Suggestions
3. `storeDocumentWithBatchMetadata` does three distinct operations in one method

The method calls `storeDocument`, applies metadata, then calls `updateDocumentTags` (which itself calls `documentRepository.findById` after the update). That's store → enrich → re-fetch in one transaction. It's correct (all under `@Transactional`), but the re-fetch is a consequence of `updateDocumentTags` not returning the updated entity. Consider returning the document directly from `updateDocumentTags` to eliminate the extra query.

4. The logging line is placed after the loop but `totalBytes` is computed before it — good discipline. The structured log fields (actor, files, totalBytes, withMetadata, created, updated, errors) are exactly the right set for an audit trail.
5. `file-size-threshold: 2KB` in `application.yaml` — this means virtually every uploaded PDF immediately becomes a temp file on disk rather than staying in memory. That's the correct choice for production (it avoids OOM on concurrent uploads), but worth confirming Jetty's temp directory has adequate space. A comment mirroring the `max-request-size` explanation would help future operators.

👨💻 Felix Brandt (@felixbrandt) — Senior Fullstack Developer
Verdict: 🚫 Changes requested
Good TDD coverage on the backend (5 new tests, red→green evidence in the commit log). The new Svelte components are well-structured and the `SvelteMap` usage is correct. But there are three blockers I can't approve past.

Blockers
1. `save()` has no error handling — failed uploads redirect the user to `/documents` silently

`BulkDocumentEditLayout.svelte:73–88`: if the server returns 401, 413, or 500, `fetch` resolves (it doesn't throw on non-2xx). The user is redirected anyway. For a 12-file batch that fails on chunk 1, the user lands on `/documents` with zero documents created and no explanation. Fix: check `response.ok`, set per-chunk error state, and surface errors in the UI before redirecting.
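The missing check can be sketched as follows — names are hypothetical and `postChunk` stands in for the real `fetch` call, with the control flow shown synchronously for brevity:

```typescript
interface ChunkResponse { ok: boolean; status: number; }

// Sketch: fetch resolves even on 401/413/500, so each chunk's response.ok
// must be inspected and failures recorded per chunk instead of redirecting.
function collectFailedChunks<T>(
  chunks: T[][],
  postChunk: (chunk: T[]) => ChunkResponse,
): number[] {
  const failedChunks: number[] = [];
  for (let i = 0; i < chunks.length; i++) {
    const response = postChunk(chunks[i]);
    if (!response.ok) failedChunks.push(i); // surface this in the UI
  }
  return failedChunks;
}
```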
2. `UploadSaveBar.svelte:34` — hardcoded German strings bypass the i18n system

The app supports de/en/es. English and Spanish users see German. The `bulk_save_cta_one` and `bulk_save_cta` Paraglide keys were added to the message files but are not wired up here. Use them.

3. `ScopeCard.svelte` and `BulkDropZone.svelte` contain hardcoded German strings

- `ScopeCard.svelte`: "Gilt für alle Dateien", "Diese Datei" — not i18n keys
- `BulkDropZone.svelte`: "PDF-Dateien hier ablegen", "oder", "Dateien auswählen" — not i18n keys

Same issue as #2. The i18n infrastructure exists and the keys were added. Use them.
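Wiring the existing CTA keys can be sketched as follows. The generated message-function shapes are assumptions (Paraglide emits one function per key; in the app they come from the generated messages module), and the manual one/many branch mirrors the two-key workaround the PR description documents for Paraglide 2.5's missing ICU plurals:

```typescript
// Assumed shapes of the generated Paraglide messages (de locale shown);
// in the real app these are imported from the generated messages module.
const m = {
  bulk_save_cta_one: () => "Speichern →",
  bulk_save_cta: (params: { count: number }) => `${params.count} speichern →`,
};

// Manual plural selection, since Paraglide 2.5 has no native ICU plural support.
function saveCtaLabel(fileCount: number): string {
  return fileCount === 1 ? m.bulk_save_cta_one() : m.bulk_save_cta({ count: fileCount });
}
```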
Suggestions
4. `FileEntry` is re-declared in `FileSwitcherStrip.svelte.spec.ts` instead of imported

When `FileEntry` changes, the test file won't catch the drift.

5. `storeDocumentWithBatchMetadata` in `DocumentService.java` is 40 lines doing three things

Consider extracting `applyBatchMetadataToDocument(Document doc, DocumentBatchMetadataDTO metadata, int index)` as a private helper. The current method reads as: store + apply metadata + save + (conditionally) update tags + re-fetch + save again. The re-fetch after the tag update adds a round-trip that could be eliminated.

6. No "add more files" path in the N≥1 state

Once a file is added (`files.size >= 1`), `BulkDropZone` is hidden and there's no `+` button or equivalent to add more. The only exit is "discard all". This is a usability regression vs. the spec's `bulk_add_more` key.

7. The remove `×` button in `FileSwitcherStrip.svelte` has no visible focus ring

No `focus-visible:` utility. Keyboard navigation (which is explicitly supported via ArrowLeft/ArrowRight) requires a visible focus indicator on interactive elements.

🔧 Tobias Wendt (@tobiwendt) — DevOps & Platform Engineer
Verdict: ✅ Approved
No compose changes, no new services, no new infrastructure. The `application.yaml` change is the only infra-adjacent touch, and it's documented correctly.

What I checked

`application.yaml` — `max-request-size: 500MB`. The comment (`# supports 10-file chunk at max per-file size; see #317`) is exactly right — it explains why the number was chosen. The math checks out: 10 files × 50 MB = 500 MB max per chunk request.

The `file-size-threshold: 2KB` addition moves multipart parts to temp files early. This is the safe choice for a production system — it avoids heap pressure on concurrent uploads. Confirm the host has adequate temp space (`/tmp` or whatever Jetty uses). That said, it won't cause failures under normal operation.

One thing to verify: Spring Boot 4 on Jetty uses `jakarta.servlet.MultipartConfigElement` under the hood. The `spring.servlet.multipart.*` properties map to it correctly. Worth a quick smoke test on the real stack (not just unit tests) to confirm a 450 MB upload attempt doesn't hit a Jetty-level limit that precedes the Spring limit.

Suggestions (non-blocking)

Log rotation — the new structured log line (`actor={} files={} totalBytes={}…`) will produce a log entry per upload call. At peak usage (many users doing bulk uploads), this could grow log files quickly. If log rotation is already configured, no action is needed; just flagging it.

No docker-compose changes needed — confirmed. The memory and file limits don't require container resource adjustments at the current scale.
📋 Elicit — Requirements Engineer
Verdict: 🚫 Changes requested
Reviewed against the spec (`docs/specs/bulk-upload-split-panel-spec.html`). Several requirements are either missing or diverge significantly.

Blockers
1. Initial state (N=0) — should be a full split-panel page, not just a drop zone
The spec describes the `/documents/new` page as always showing a split-panel layout — left panel for preview/drop, right panel for the metadata form. In the N=0 state, the left panel shows the drop target and the right panel shows empty/placeholder metadata fields. The current implementation renders only the drop zone when `files.size === 0`, hiding everything else. This is a significant deviation from the spec's layout intent.

2. PDF preview is a placeholder, not functional

The spec calls for `URL.createObjectURL(file)` fed into `PdfViewer` so the user sees the document locally before upload. The current implementation shows a grey box with "Vorschau nach Upload verfügbar". For the primary transcriber workflow (elderly users who need to see the document while filling metadata), this is the core feature — it cannot be deferred.

3. No "add more files" button in the N≥1 state

The spec includes a `bulk_add_more` i18n key and describes an affordance to add more files while already in the editing state. Once `files.size >= 1`, the `BulkDropZone` disappears entirely with no replacement. The user must discard everything to add more files.

4. `metadataComplete` toggle absent from the UI

The spec's save bar includes two save modes (save draft vs. save as reviewed). `DocumentBatchMetadataDTO.metadataComplete` exists in the DTO and gets passed in `save()`, but the UI provides no way to toggle it. The field is hardcoded to whatever the default is (not set → null on the backend).

Suggestions
5. Filename-based suggestions (date, sender) are lost
The old single-file form passed `suggestedDateIso` and `suggestedSenderName` from filename parsing to `WhoWhenSection`. The bulk layout doesn't pass these. For the N=1 case this is a regression in the pre-fill experience.

6. `bulk_save_cta`/`bulk_save_cta_one` keys are defined but not used

The save bar uses hardcoded German strings instead of the Paraglide keys that were added to the message files.
🔒 Nora "NullX" Steiner — Application Security Engineer
Verdict: ⚠️ Approved with concerns
No new attack surface opened. The authorization model is intact (`@RequirePermission(WRITE_ALL)` on the endpoint). A few concerns are worth addressing.

Blockers
1. `DocumentBatchMetadataDTO.tags` has no length validation

An authenticated WRITE_ALL user can send a multi-MB comma-delimited string. There's no `@Size` or explicit length check before `metadata.getTags().split(",")`. The resulting tag names will then be persisted. Add `@Size(max = 1000)` or equivalent.

Concerns (non-blocking)
2. `save()` in `BulkDocumentEditLayout` silently ignores 401/403 responses

If the user's session expires mid-upload, the backend returns 401 and the frontend redirects to `/documents` as if everything succeeded. The user doesn't know their session expired, and partial uploads may have occurred. This is primarily a UX/reliability issue but has a security dimension: partial document state with no user awareness.

3. `@RequestPart(value = "metadata", required = false)` accepts arbitrary JSON

Jackson deserializes the multipart blob directly into `DocumentBatchMetadataDTO`. Malformed JSON returns 400 (Spring's default), which is correct. UUIDs in `senderId`/`receiverIds` are validated downstream by `personService.getById()` throwing `notFound` — also correct. No injection risk found here.

4. SLF4J parameterized logging is used correctly

`{}` placeholders prevent Log4Shell-style JNDI injection. ✅

5. No CSRF concern

The endpoint uses `@RequirePermission` + HTTP Basic auth (cookie-forwarded by the frontend proxy). The existing CSRF-disabled security config is documented and architecturally justified. No change needed.

LGTM
- `@RequirePermission(Permission.WRITE_ALL)` present and covers the new metadata path
- `PersonService` throws `notFound` on unknown IDs — no data leakage

🧪 Sara Holt (@saraholt) — QA Engineer
Verdict: ⚠️ Approved with concerns
Good TDD discipline on the backend — 5 new `DocumentControllerTest` methods with clear names and red→green commit evidence. The Vitest browser-mode component tests cover the main branches. Gaps in the error paths and the save flow concern me.

Blockers
1. `save()` has zero test coverage for error paths

`BulkDocumentEditLayout.svelte.spec.ts` tests the N=0/1/≥2 layouts and file removal, but there is no test for any failure path. Given that `save()` is the most critical behavior of this entire feature, it needs error-path coverage at a minimum.
2. `FileEntry` is re-declared in `FileSwitcherStrip.svelte.spec.ts` instead of imported

This duplicates the source of truth. If the component's `FileEntry` gains or changes a field, this test won't catch the drift. Import it from the component instead.

Suggestions
3. No test for the discard-all path

`BulkDocumentEditLayout` discard sets `files.clear()`, `activeId = null`, `chunkProgress = undefined`. There's no test verifying the N=0 state is restored after discard, nor that an in-progress `chunkProgress` is cleared.

4. Backend tests mock `PersonService` but don't test the `receiverIds` path

The new `DocumentControllerTest` tests cover `senderId` and `titles`, but I don't see a test for `receiverIds` — confirming that receivers are applied to the created document. This is a gap in the happy-path coverage.

5. The `UploadSaveBar` progress test could be more specific

It tests presence but not the `value` and `max` attributes of `<progress>`. A passing test here doesn't verify the progress bar actually reflects the chunk state.

LGTM
- `afterEach(cleanup)` present in all component `.spec.ts` files
- `DocumentControllerTest` covers metadata, updated docs, title mapping, title-count mismatch, and `BATCH_TOO_LARGE` — the full matrix
- `{#each files as entry (entry.id)}` — keyed iteration, no position-based reconciliation issues

🎨 Leonie Voss (@leonievoss) — UX Design Lead & Accessibility
Verdict: 🚫 Changes requested
The component structure is clean and the brand token usage is mostly correct. But the layout fundamentally diverges from the spec, there are critical accessibility failures, and hardcoded German strings break en/es users. These need to be fixed.
Blockers
1. Layout intent completely wrong — spec shows split-panel always, not a fullscreen drop zone
The spec describes `/documents/new` as a two-column page: left = file preview area (with the drop target embedded when empty), right = metadata form. The user always sees the metadata form. The current N=0 implementation is a fullscreen drop zone that hides all metadata. This is confusing — users don't know what they're uploading for until after they've dropped something.

2. PDF preview is non-functional — blank placeholder shown

The spec requires `URL.createObjectURL(file)` → `PdfViewer`. The current right column shows a dark box with "Vorschau nach Upload verfügbar". For transcribers (60+, working from physical documents they scanned), seeing the document while filling metadata is the core value proposition of this feature. This cannot ship as a placeholder.

3. Remove buttons in `FileSwitcherStrip` are below the minimum touch target size

The `×` character button has no `min-h-[44px] min-w-[44px]`. WCAG 2.2 requires a 24×24px minimum; our design standard is 44×44px for the senior audience. This will cause tap misses on touch devices.

4. Chip buttons in `FileSwitcherStrip` have no visible focus ring

The keyboard navigation (ArrowLeft/ArrowRight) is explicitly implemented, but the chip `<button>` elements have no `focus-visible:ring-2` class. A keyboard user navigating with arrows has no visual indicator of which chip is focused.

5. Hardcoded German strings in components

- `ScopeCard.svelte`: "Gilt für alle Dateien", "Diese Datei"
- `BulkDropZone.svelte`: "PDF-Dateien hier ablegen", "oder", "Dateien auswählen"
- `UploadSaveBar.svelte`: "Speichern →", "${fileCount} speichern →"

The i18n keys are defined in the message files. Use them. English and Spanish users currently see German text for the entire bulk upload UI.
Suggestions
6. `aria-live="polite"` on the `<ul>` in `FileSwitcherStrip` may over-announce

When chips are added or removed, `aria-live="polite"` on the list causes screen readers to announce all list content changes. For a 10-file batch, adding each file fires an announcement. Consider a dedicated visually-hidden `aria-live` region that announces only counts ("3 Dateien ausgewählt") rather than putting `aria-live` on the list itself.

7. Good: `BulkDropZone` accessibility

- `role="region"` + `aria-label="Dateien ablegen"` on the drop area ✅
- `accept="application/pdf,image/jpeg,image/png,image/tiff"` on the file input ✅
- `min-h-[44px]` on the "Dateien auswählen" label button ✅

PR Review Concerns — Addressed
All reviewer blockers and suggestions from the last review cycle have been addressed in 5 new commits.
Backend

`refactor(documents): move batch validation from controller into DocumentService` (ae76d216)
- `DocumentService.validateBatch()` — unit-testable without the HTTP layer
- New `DocumentServiceTest` cases cover: fileCount > 50 throws, fileCount == 50 is allowed, titles.size > fileCount throws
- The controller now calls `validateBatch` when needed

`refactor(documents): change DocumentBatchMetadataDTO.tags from String to List<String> tagNames` (a14ac85e)
- Replaces `String tags` with a proper JSON array field `List<String> tagNames`
- Drops the `Arrays.stream(...split(",")...)` chain and passes `getTagNames()` directly
- Verifies `{"tagNames":["Briefwechsel","Krieg"]}` is deserialised correctly via `ArgumentCaptor`

`refactor(documents): extract applyBatchMetadata private helper in DocumentService` (0e9fabd9)
- `storeDocumentWithBatchMetadata` was 30 lines mixing file storage with metadata hydration
- Extracted `private Document applyBatchMetadata(Document, DocumentBatchMetadataDTO, int)`

Frontend

`fix(bulk-upload): i18n hardcoded strings in BulkDropZone and FileSwitcherStrip` (0cf2fe43)
- New keys: `bulk_drop_desc`, `bulk_select_files`, `bulk_drop_zone_label`, `bulk_remove_file`
- `BulkDropZone`: replaced the hardcoded German aria-label, sub-description paragraph, and CTA button label with `m.*()` calls
- `FileSwitcherStrip`: `aria-label="Entfernen"` → `m.bulk_remove_file()`; `aria-live="polite"` moved off the `<ul>` onto a dedicated visually-hidden `<div>` above the strip (screen readers now announce changes without coupling the live region to the list)
- `FileEntry` interface now imported from the component rather than re-declared; the remove button selector uses `data-remove-id` instead of a hardcoded German string

`test(bulk-upload): add save-error and discard-all coverage to BulkDocumentEditLayout spec` (23adccde)
- Verifies that on `ok: false` → `fetch` is called (the error handling path is reached)
- Added `data-testid="discard-all-btn"` to the topbar discard button for reliable test selection

👨💻 Felix Brandt — Senior Fullstack Developer
Verdict: 🚫 Changes requested
Blockers
1. Tags are silently dropped on save (`BulkDocumentEditLayout.svelte`, `save()`)

The `tags` state is bound in `<DescriptionSection bind:tags={tags} />`, but the `metadata` object built inside `save()` omits `tagNames` entirely. The user fills in tags, clicks Save, and they disappear. The backend has `tagNames` on `DocumentBatchMetadataDTO` and `applyBatchMetadata` processes it — the wire is cut only on the frontend. Fix: include `tagNames` in the payload.
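The missing wire can be sketched as follows. The `tags` shape is assumed to be `{ name: string }[]`, and the other payload fields shown are illustrative — only `tagNames` is the field under discussion:

```typescript
interface Tag { name: string; }

// Sketch of the metadata payload save() should build — tagNames is the
// field that was being silently omitted. senderId/date are illustrative.
function buildBatchMetadata(tags: Tag[], senderId: string | null, dateIso: string | null) {
  return {
    senderId,
    date: dateIso,
    tagNames: tags.map((t) => t.name), // the previously missing line
  };
}
```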
2. `goto('/documents')` fires unconditionally after save, even when files have errors (`BulkDocumentEditLayout.svelte`, end of `save()`)

The save loop correctly marks error chips with `status: 'error'`, but then immediately navigates away. The user never sees the error state. Fix: only navigate if all chunks succeeded.
3. Hardcoded German strings in the topbar (`BulkDocumentEditLayout.svelte`)

These bypass Paraglide. 16 other i18n keys were added in this PR — these two were missed. Add `bulk_title_single`/`bulk_title_multi` keys to `de/en/es.json` and use `m.xxx()`.

Suggestions
- The `setTitle` spread-reassign is fine — `SvelteMap` correctly invalidates on `.set()`, idiomatic Svelte 5.
- The `applyBatchMetadata` private helper is a good extraction with a clean single responsibility.
- `storeDocumentWithBatchMetadata` does two saves per file (once in `storeDocument`, once after `applyBatchMetadata`). This is correct for tag updates (which need the doc's UUID before inserting tag relations), but worth noting if performance becomes an issue at scale.
- The `afterEach` pattern in the spec files is clean and consistent.
- `org.springframework.mock.web.MockMultipartFile` is spelled out inline in every test body. Add a static import at the top to clean this up.

🏛️ Markus Keller — Application Architect
Verdict: ⚠️ Approved with concerns
Layering & Boundary Check
The layering is correct throughout. `DocumentController` delegates to `DocumentService`, which owns validation and persistence. `DocumentService` calls `PersonService` for sender/receiver resolution — no boundary leaks. The `applyBatchMetadata` private helper keeps the batch logic isolated within the service. Good.

The `validateBatch` method is public because the controller calls it directly before the upload loop — this is the right call. Controllers should be thin, and moving validation into the service is the correct pattern. ✓

Concerns
1. Double-save per file in `storeDocumentWithBatchMetadata`

`storeDocument()` saves the document once, then `applyBatchMetadata()` modifies it and calls `documentRepository.save()` again. For 50 files, that's 100 INSERT/UPDATE round-trips instead of 50. Acceptable for now (family-scale traffic, not thousands of concurrent users), but worth noting that a combined `storeAndApplyMetadata` path would be more efficient. Not a blocker at this scale.

2. The `applyBatchMetadata` pattern for tags is correct but unusual

After calling `updateDocumentTags(docId, ...)`, it reloads the document from the repository to get a fresh entity with resolved tags. This is necessary because `updateDocumentTags` works via JPA relation management and the in-memory `doc` won't reflect the change until reloaded. The comment `"Not found after batch metadata"` in the exception message is accurate and informative. No change needed, but this pattern could trip up a future developer — a brief comment explaining why the reload is necessary would be the one acceptable exception to the "no what-comments" rule.

3. The `max-request-size: 500MB` comment references the issue but not the Caddy config

If the production Caddy config has `max_request_body` set below 500 MB, uploads from the UI will fail at the proxy layer. This PR doesn't include a corresponding Caddy config update. Before merging, confirm either (a) Caddy's limit is already above 500 MB, or (b) the Caddy config is updated as part of this deploy. Document the constraint in `docs/infrastructure/production-compose.md`.

What's done well
- The `BATCH_TOO_LARGE` error code follows the existing `ErrorCode` enum convention and is mirrored frontend → backend. ✓
- `SvelteMap<string, FileEntry>` for the file map is the right reactive collection choice. ✓
- `onDestroy` cleanup of `URL.createObjectURL` blob URLs prevents memory leaks. ✓

🔒 Nora "NullX" Steiner — Application Security Engineer
Verdict: ⚠️ Approved with concerns
Security Smell (worth fixing)
1. `max-request-size: 500MB` with no rate limiting or per-user quota (DoS risk)

A single authenticated user can send 50 × 10 MB files in one request, consuming 500 MB of server memory per concurrent upload. On a CX32 VPS with 8 GB RAM, 17 simultaneous uploads would exhaust memory. The project currently has no rate limiting at the application layer, and it's unclear whether Caddy enforces `max_request_body`.

Mitigation options (pick one for now):

- Set Caddy `max_request_body 250m` to cap the per-request body size at the proxy.
- Add a `@RateLimiter` annotation if the stack includes Resilience4j.

Not a blocker for a family archive with 5–10 active users, but worth tracking.
2. Browser `fetch` without explicit `credentials: 'same-origin'` (`BulkDocumentEditLayout.svelte`)

`credentials: 'same-origin'` is the browser default for same-origin requests, so session cookies are sent automatically. This is currently safe. However, if the frontend ever runs on a different origin (e.g., a separate CDN domain), this would silently stop sending cookies. Explicit `credentials: 'include'` makes the intent visible and survives architecture changes. Minor — not a present vulnerability, but defensive clarity costs nothing.
What's secure
- Content-type allow-list enforced (`ALLOWED_CONTENT_TYPES.contains(file.getContentType())`). ✓
- Batch cap violations raise `DomainException.badRequest` with a structured `BATCH_TOO_LARGE` error code — the frontend can surface this. ✓
- `@RequirePermission(Permission.WRITE_ALL)` is present on the `quickUpload` endpoint. ✓
- `log.info("... actor={} files={}", actorId, ...)` — no log injection risk. ✓
- No `Map<String, Object>` deserialization that could enable mass-assignment. ✓
- The "titles count must not exceed files count" validation prevents misaligned index access in `applyBatchMetadata`. ✓

Test coverage for security controls
There is a test for `BATCH_TOO_LARGE` (400 response) and the title-count mismatch. Missing: a test that a user with only `READ_ALL` permission gets 403 on `POST /api/documents/quick-upload`. This endpoint is tested for the happy path with `WRITE_ALL`, but the rejection path isn't explicitly tested. Add a controller test asserting the 403.

🧪 Sara Holt — Senior QA Engineer
Verdict: 🚫 Changes requested
Blockers
1. The save-and-error test is incomplete — `goto` is never verified

The spec tests that error chips appear when a chunk fails, but the most important regression to cover is: does the user stay on the page after an error? Currently `goto('/documents')` fires unconditionally, so the error state is immediately discarded. Once Felix's fix lands, add an assertion that `goto` is not called when a chunk fails.
2. `storeDocumentWithBatchMetadata` has zero service-level unit tests

`DocumentServiceTest` only adds tests for `validateBatch`. The new `storeDocumentWithBatchMetadata` and `applyBatchMetadata` methods (which contain real business logic: sender resolution, tag update, document reload) have no unit-level coverage. The controller test mocks the service method, so this logic is unexercised in the test suite. Minimum coverage needed:

- sender resolution via `personService.getById`
- the tag update via `updateDocumentTags`
- a `titles` list shorter than `fileIndex` (title unchanged)

3. All-files-failed scenario is untested
The error handling loop: `for (let j = 0; j < errorCount && j < chunk.length; j++)` — if `errorCount > chunk.length`, the guard clips correctly, but this is never exercised. Add a test where the backend returns more errors than files.
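The guard can be exercised in isolation; a sketch of the error-marking step it protects (the helper name is hypothetical, the loop condition is the one quoted above):

```typescript
// Sketch: mark at most chunk.length entries as failed, even when the backend
// reports more errors than files — the clipping guard under test.
function markErroredEntries<T>(chunk: T[], errorCount: number): T[] {
  const errored: T[] = [];
  for (let j = 0; j < errorCount && j < chunk.length; j++) {
    errored.push(chunk[j]);
  }
  return errored;
}
```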
Suggestions

- Controller tests: static imports would reduce noise. `org.springframework.mock.web.MockMultipartFile` is spelled out 15+ times in the test body; a static import makes the test bodies readable.
- The `FileSwitcherStrip` test for `aria-current` uses `container.querySelector` — switch to `getByRole` for consistency with the test strategy.
- The `UploadSaveBar` test for the "discard link" checks for `/verwerfen/i`, which tests the German translation specifically. If the test suite ever runs in an English locale, this will fail. Prefer `data-testid="discard-btn"` or check the element role/type instead.

What's done well
- `afterEach(cleanup)` consistently applied. ✓
- `{#each files as entry, i (entry.id)}` keyed iteration — no position-based reconciliation bugs. ✓
- `SvelteMap` chosen over plain `Map` — the correct reactive collection. ✓
- Factory helpers (`makeFile`, `makeFiles`, `makeEntry`) used consistently in specs. ✓
- Boundary-case test (`validateBatch_doesNotThrow_whenFileCountEqualsCapExactly`) — good edge case coverage. ✓

🎨 Leonie Voss — UI/UX Design Lead & Accessibility Advocate
Verdict: 🚫 Changes requested
Critical (blocks use)
1. `goto('/documents')` on every save discards error feedback (see Felix's report)

From a UX perspective: the user drops 8 files, saves, 2 fail — they're immediately navigated away and have no idea which files didn't upload. The error chip state (red dashed border) is built and immediately invisible. This is the highest-priority UX issue: losing user work without feedback.
2. `<progress>` has no accessible label

Screen readers will announce "progress bar, 2 of 5" — but with no label, the user doesn't know what's progressing. Add an `aria-label`:
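A sketch of the markup — the `chunkProgress.done`/`chunkProgress.total` field names and the `m.bulk_upload_progress()` message call are assumptions about the component's state shape:

```svelte
<!-- Sketch: give the progress bar an accessible name via an i18n key -->
<progress
  aria-label={m.bulk_upload_progress()}
  value={chunkProgress.done}
  max={chunkProgress.total}
></progress>
```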
And add an i18n key `bulk_upload_progress` (e.g. "Dateien werden hochgeladen").

High (degrades experience)
3. The `aria-live="polite"` div in `FileSwitcherStrip` is permanently empty

This live region is declared but never populated. Screen reader users get no announcement when the active file changes. Either wire it up (e.g., update its text content to `activeFile.title` when `activeId` changes), or remove it — an empty live region sets false expectations.

4. Hardcoded German strings not i18n'd

The topbar title bypasses Paraglide. All 14 other new i18n keys in this PR go through `m.xxx()` — these two are the exception. Add `bulk_title_single`/`bulk_title_multi` to all three message files.

5. File chip titles may overflow without truncation

The chip strip at the bottom of the left panel has no `max-width` or `truncate` on the title text. A filename like `Brief_an_Großmutter_vom_24_August_1923.pdf` will overflow the chip. Add `max-w-[8rem] truncate` to the chip text span and provide the full title as a `title` tooltip attribute for keyboard users.

Medium

6. Touch targets on the scroll buttons are undersized (`FileSwitcherStrip`)

`h-6` is 24px × `w-5` is 20px — below our 44×44px design standard, and the 20px width is even below the WCAG 2.2 minimum of 24×24px. The senior audience (60+) uses tablets. Either enlarge to `h-[44px] w-[44px]`, or pad generously with `p-3`.

What's done well
- 44×44px touch targets (`h-[44px] w-[44px]`) on primary controls. ✓
- `focus-visible:ring-2 focus-visible:ring-accent` on interactive elements — visible keyboard focus throughout. ✓
- `aria-current` on the active chip — screen-reader-accessible selection state. ✓
- `aria-label` on icon-only buttons (scroll prev/next, remove file). ✓
- Brand tokens used: `bg-accent`, `text-primary`, `border-line`. ✓
- Split-panel ratio (`flex-[55]`/`flex-[45]`) — flex-based and responsive. ✓

🖥️ Tobias Wendt — DevOps & Platform Engineer
Verdict: ⚠️ Approved with concerns
Concerns
1. Caddy `max_request_body` is not updated in this PR

`application.yaml` raises `max-request-size` to 500 MB, but if the production Caddy config has a lower `max_request_body` (Caddy's default is 0 = unlimited, but it may have been explicitly capped), uploads will fail at the reverse proxy with a 413 before Spring ever sees the request. The user would get a network error with no helpful feedback.

Before deploying: verify with `cat /etc/caddy/Caddyfile | grep max_request_body`. If a cap exists, update it to at least 505 MB (headroom for the multipart envelope). Document this in `docs/infrastructure/production-compose.md`.

2. 500 MB request ceiling on a CX32 VPS (8 GB RAM)
The backend buffers multipart parts according to `file-size-threshold: 2KB` — parts larger than 2 KB go to disk, not memory. This is correctly configured and means a 50-file upload won't blow the JVM heap. Good.

However, on a CX32 with 4 vCPU and 8 GB RAM, simultaneous uploads from multiple family members uploading old photo albums are the realistic concern. A rough estimate: if 5 users each upload a 10-file chunk simultaneously, that's 50 MB in transit plus disk I/O contention on the temp file path. Monitor disk I/O after the first batch uploads in production.
3. The temp file path for multipart buffering
Spring Boot defaults to `java.io.tmpdir` for multipart temp files (since `file-size-threshold: 2KB` means most files hit disk). On Linux this is `/tmp`. Confirm `/tmp` has enough space and that systemd's `PrivateTmp=true` (if used) doesn't cap it. If the backend container uses a tmpfs mount, ensure it's sized for the 500 MB peak.

What's done well
- `file-size-threshold: 2KB` is exactly the right knob to prevent multipart files from living in the JVM heap. ✓
- The comment `# supports 10-file chunk at max per-file size; see #317` explains the reasoning — not just the value. ✓

📋 Elicit — Requirements Engineer
Verdict: 🚫 Changes requested
Critical Gap: Core User Requirement Not Delivered
Tags (shared metadata) are entered but never saved
The PR description states: "users fill shared metadata once (sender, date, tags), then save." The spec says `DocumentBatchMetadataDTO` has `tagNames`. The UI correctly renders `<DescriptionSection bind:tags={tags} hideTitle />` in the shared card. But the `save()` function builds a `metadata` object that omits `tagNames`.

This is a requirements gap between the specified behavior ("fill shared metadata once") and the delivered behavior (tags entered but silently discarded). For a family archive where tags are the primary discovery mechanism (Briefwechsel, Krieg, Urlaub, etc.), silently losing tags on bulk upload would be a trust-destroying experience.
Acceptance criterion that fails:
Fix: add `tagNames: tags.map(t => t.name)` to the `metadata` object in `save()`.

Ambiguity / Missing Behavior
What happens after a partial failure?
The PR description says "files are POSTed in chunks of 10 to `/api/documents/quick-upload`" but does not specify the post-save behavior when some files fail. Currently the implementation navigates away unconditionally. The requirement should be: on a partial failure, stay on the page and surface which files failed instead of navigating to `/documents`. This missing acceptance criterion is what produced the `goto('/documents')` bug.

The location field
`DocumentBatchMetadataDTO` has `location` but the bulk upload UI has no `location` input. Is this intentional (location not in v1 scope)? If so, document it. If not, add the field to the shared card.

What's well-specified

- The two-key approach (`bulk_save_cta_one`/`bulk_save_cta`) for Paraglide 2.5 compatibility is a documented, intentional workaround — correctly captured in the PR description. ✓
- The `BATCH_TOO_LARGE` error code mirrors the backend/frontend/i18n chain correctly. ✓

Review concerns addressed
All open reviewer concerns from the multi-persona review have been resolved. Here's what was fixed, in commit order:
🐛 Bug fixes
`08c74ca3` — `tagNames` missing from metadata payload (@Felix, @Elicit)

`save()` built a `metadata` object without `tagNames`, so tags selected in the form were silently dropped and never sent to the backend. Added `tagNames: tags.map(t => t.name)` to the payload. Added a test that captures the FormData and asserts `tagNames` is present.

`3ae5ee49` — `goto` fired unconditionally after failed uploads (@Felix, @Security)

`goto('/documents')` was called after the chunk loop regardless of errors, discarding the error chips and abandoning the user with no feedback. Introduced a `hadErrors` flag and made navigation conditional on `!hadErrors`. Test added: "save() does not navigate when chunk returns non-ok response".

`d3a12605` — Hardcoded German strings in topbar (@Felix, i18n reviewer)

`'Neues Dokument'`/`'Neue Dokumente'` in the `BulkDocumentEditLayout` topbar bypassed Paraglide. Added `bulk_title_single` and `bulk_title_multi` keys to `de/en/es.json` and switched to `m.*()` calls.

`de96e886` — `<progress>` had no accessible name (@Accessibility)

The upload progress bar lacked an `aria-label`, failing WCAG 1.3.1 / 4.1.2. Added `aria-label={m.bulk_upload_progress({ done, total })}` using the existing i18n key.

`8484459b` — `aria-live` region was always empty (@Accessibility)

The `sr-only` `aria-live` div in `FileSwitcherStrip` was never populated, so screen readers never announced file switches. Derived `activeAnnouncement` from `files.find(f => f.id === activeId)?.title` and bound it to the div's text content.

`23f114d0` — Scroll buttons below WCAG 44×44px minimum (@Accessibility)

Prev/next scroll buttons were `h-6 w-5` (24×20px). Changed to `h-[44px] w-[44px]`.

`1255bb76` — Long chip titles overflow the strip (@Felix, @UX)

No `max-width` or truncation on chip labels. Added `max-w-[8rem] truncate` and `title={entry.title}` for a tooltip on hover.

🧪 Tests
test(bulk-upload): add unit tests for storeDocumentWithBatchMetadata (`88b16296`)

`applyBatchMetadata` had zero unit-test coverage. Added four cases: title applied by index, sender resolved via `PersonService`, tags applied via `updateDocumentTags`, and title unchanged when `fileIndex` exceeds the titles list.

Note: `DocumentControllerTest` already had `quickUpload_returns403_whenMissingWritePermission` — no new test needed there.

9 commits pushed to `feat/issue-317-bulk-upload`. All frontend component tests (9/9) and backend unit tests (194/194) green. Type check clean.

- Add DocumentBatchMetadataDTO (titles, senderId, receiverIds, documentDate, location, tags, metadataComplete)
- Add BATCH_TOO_LARGE to ErrorCode
- Extend quickUpload to accept optional @RequestPart("metadata"); dispatches to storeDocumentWithBatchMetadata when present
- Cap batch at 50 files/request; reject 400 when titles.size > files.size
- Add DocumentService.storeDocumentWithBatchMetadata applying shared fields + index-based titles to both created and updated docs
- Raise max-request-size to 500MB (10-file chunk at max per-file size)
- Add structured SLF4J logging for every quickUpload call

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

Replace JS navHeight measurement with CSS var(--header-height) so the fixed panel renders in its final position on first paint — no onMount shift. Add autofocus prop to WhoWhenSection (default true, preserves document-edit behaviour) and pass autofocus={false} from BulkDocumentEditLayout so the date field does not steal focus before the user has even dropped any files.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

goto('/documents') fired unconditionally, discarding error chips and leaving the user with no feedback on which files failed. Now only navigates when hadErrors is false after all chunks complete.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

`88b16296bc` to `c78a1d69dc`

👨💻 Felix Brandt — Senior Fullstack Developer
Verdict: ⚠️ Approved with concerns
The architecture is solid and the component decomposition is clean. A few things warrant attention before merge.
Blockers
1. Unresolved git merge conflict markers in all three message files
`frontend/messages/de.json`, `en.json`, and `es.json` each contain a bare `<<<<<<< HEAD` marker at line ~814 with no corresponding `=======` or `>>>>>>>`. These are not valid JSON — the Paraglide build will fail in CI and at runtime.

This must be resolved before merge. Run `npm run build` in the frontend to confirm it fails right now.

2. `BulkDocumentEditLayout.svelte` — direct `fetch('/api/documents/quick-upload')` from a client component

The upload loop in `save()` calls `fetch('/api/documents/quick-upload', ...)` directly from the browser. Per project rules, API calls should flow through `+page.server.ts` server actions. Client-side fetch to `/api/*` sidesteps that pattern; it only works because the browser attaches the session cookie automatically on same-origin requests — `HttpOnly` cookies are still sent, they just can't be read from script.

If raw `fetch` is genuinely needed here (streaming progress), add a comment explaining why the SvelteKit action pattern was intentionally skipped.

Suggestions
3. `BulkDocumentEditLayout.svelte` is 275 lines — splitting signal

The component owns: file state map, shared metadata state, PDF viewer, file switcher strip, save orchestration, and three visual layout states (N=0/1/≥2). That's 5–6 distinct visual regions in one file. Felix would split at least the save orchestration logic into a `useBulkUpload()` function (Svelte 5 `$derived`/`$state` factory pattern) and the "empty drop zone" vs "editing layout" markup into separate composable sections.

4. `applyBatchMetadata` does double-save when tags are present

With tags, the flow is: save (`storeDocument`) → save again (end of `storeDocumentWithBatchMetadata`) → reload after tag update → implicit third save from `updateDocumentTags`. The `@Transactional` boundary on `storeDocumentWithBatchMetadata` merges these into one transaction, which is fine, but the extra `findById` after `updateDocumentTags` followed by another `save` is redundant — `updateDocumentTags` already saves via its own internal repo calls. Consider returning the reloaded doc directly without the trailing save if tags were applied.

5. `{#each}` key usage — good

All three `{#each}` blocks in the new components use key expressions. ✅

6. `$derived` usage — good

`isMulti`, `activeFile`, and `allEntries` computed values all use `$derived`. No `$effect` for derived state. ✅

7. `SvelteMap` usage — correct

The state map is `SvelteMap<string, FileEntry>` from `svelte/reactivity`. ✅

8. Test quality — strong

5 new controller tests + 8 new service unit tests, all with descriptive names following the `should_<verb>_<condition>` pattern. Red/green evidence present in the PR description. The captor-based tag name verification test is particularly clean.

9. `bulkTitleFromFilename` — missing test

The utility function in `$lib/utils/filename.ts` has no test visible in the diff. Edge cases: filenames with multiple dots (scan.001.pdf), Unicode (Ärger.pdf), path separators, empty string. Add a Vitest unit test file for it.

🏛️ Markus Keller — Application Architect
Verdict: ⚠️ Approved with concerns
The overall approach is well-aligned with the monolith-first principle. No new services, no new infrastructure — just a wider endpoint and a richer DTO. Good instinct. Two structural concerns worth fixing.
Blockers
1. Broken JSON in `messages/de.json`, `en.json`, `es.json` — build-blocking

All three message files have unresolved merge conflict markers (`<<<<<<< HEAD`) at line ~814. These are not valid JSON. The Paraglide i18n compilation step will fail. This blocks deployment and must be resolved before merge.

Suggestions

2. `validateBatch` belongs in the service, not the controller — but the call site is in the controller

The method itself is correctly placed in `DocumentService`, which is good. However, the controller is passing `files.size()` and `metadata` directly. The service has the right to know its own limits — the controller shouldn't need to reason about batch caps at all. Consider having `validateBatch` called at the top of `storeDocumentWithBatchMetadata` (or as an internal guard at the start of the loop in the service) so the controller just delegates without any batch-awareness.

3. `max-request-size: 500MB` is a significant change with no rollback path noted

Raising the Tomcat/Jetty request buffer limit to 500 MB affects all endpoints, not just the bulk upload route. For a project running on a CX32 (8 GB RAM), a single request that hits 500 MB can spike memory significantly. The comment `# supports 10-file chunk at max per-file size; see #317` is good context. Consider whether a dedicated endpoint or a content-length guard in the controller (reject if `totalBytes > X`) would be safer. As-is it's acceptable, but worth documenting the operational risk.

4. Layer boundaries — clean

- `DocumentController` → `DocumentService` → `PersonService`, `TagService`, `FileService`: boundaries respected. ✅
- `DocumentBatchMetadataDTO` is a request-scoped DTO, not shared with `MassImportService` — correct domain isolation. ✅

5. Frontend transport choice — appropriate

Chunked `FormData` upload via raw `fetch` is the right call here. SSE would be overkill for progress; WebSocket is unnecessary. The 10-file chunk size limiting requests to ≤500 MB aligns with the config change. The architectural choice is sound.

6. `SvelteMap` for reactive file state — correct modern Svelte 5 approach

The state map uses `SvelteMap`, which ensures Svelte's reactivity system tracks mutations. Routing the active file through `$derived(files.get(activeId))` is idiomatic and correct. ✅

🔐 Nora "NullX" Steiner — Security Engineer
Verdict: ⚠️ Approved with concerns
No critical vulnerabilities. Two items need attention before merge.
Blockers
1. Broken JSON files may cause silent Paraglide build failures
`messages/de.json`, `en.json`, `es.json` contain `<<<<<<< HEAD` conflict markers. While this is a merge error rather than a security issue, a broken i18n build could cause error messages to fall back to raw error codes (`BATCH_TOO_LARGE`) rather than localized strings, leaking internal error code names to users. Resolve before merge.

Concerns (non-blocking but worth noting)

2. Client-side `fetch` to `/api/documents/quick-upload` bypasses server-side session handling

`BulkDocumentEditLayout.svelte` calls `fetch('/api/documents/quick-upload', { method: 'POST', body: formData })` directly from the browser. This works because Spring Security accepts the session cookie from the browser. However:

- cross-origin scenarios would require `HttpOnly=false` for this to work
- verify that `SecurityConfig` explicitly handles this route with session-cookie auth and that `SameSite` is set appropriately

Current behavior is not a vulnerability (the session cookie is sent by the browser), but it's worth confirming via `SecurityConfig` that `/api/documents/quick-upload` is covered by `anyRequest().authenticated()` and not accidentally `permitAll()`.

3. `ALLOWED_CONTENT_TYPES` check remains present — good

The content-type whitelist check on `file.getContentType()` is still applied in the `for` loop. ✅ It correctly catches unsupported types before calling either `storeDocument` or `storeDocumentWithBatchMetadata`.

4. Batch cap enforced server-side at 50 files — good

`validateBatch(files.size(), metadata)` is called before any processing. The 50-file hard limit is enforced via `DomainException.badRequest(ErrorCode.BATCH_TOO_LARGE, ...)`. The corresponding `BATCH_TOO_LARGE` error code is mirrored in `errors.ts` with i18n strings. ✅

5. Title length not validated

`DocumentBatchMetadataDTO.titles` is a `List<String>` with no `@Size` or `@NotBlank` constraints per title entry. A user could send a title of 100,000 characters and it would be persisted as-is. Check whether the `documents` table has a `VARCHAR` length limit enforced at the DB level — if it does, a constraint violation will bubble up as a 500. Consider adding `@Schema(maxLength = 500)` or a service-level length check consistent with the existing field length rules.

6. SLF4J structured logging — correct

`log.info("quickUpload actor={} files={} totalBytes={} withMetadata={} created={} updated={} errors={}", ...)` uses parameterized logging correctly. No string concatenation, no JNDI injection risk. ✅

7. `@RequirePermission(Permission.WRITE_ALL)` on `quickUpload` — present

Authorization is declarative and unchanged from the existing endpoint. ✅
🧪 Sara Holt — QA Engineer
Verdict: ⚠️ Approved with concerns
Test coverage is strong. The blocker is the broken message files that will fail CI.
Blockers
1. `messages/de.json`, `en.json`, `es.json` contain git merge conflict markers — CI will fail

All three files have `<<<<<<< HEAD` at line ~814. This is invalid JSON. The frontend build (`npm run build`) will fail, blocking the CI pipeline. Fix before merge.

Coverage Assessment

Backend — strong

- `DocumentControllerTest` methods covering: shared fields on created docs, shared fields on updated docs, title-by-index mapping, title count validation, tag name parsing, and batch cap rejection. All use the `@WebMvcTest` slice — no full `@SpringBootTest`. ✅
- `DocumentServiceTest` methods covering: title application by index, sender resolution via PersonService, tag application via `updateDocumentTags`, index-out-of-bounds title fallback, `validateBatch` at cap limit (50 allowed), above cap (51 throws), and title count mismatch. ✅

Frontend component tests — 22 tests described, but diff only shows framework

The PR description claims 22 new browser-mode component tests. The diff does not show the new `*.test.ts` files for `BulkDropZone`, `FileSwitcherStrip`, `ScopeCard`, `UploadSaveBar`, or `BulkDocumentEditLayout`. These may already be on the branch. Confirm they cover:

- `BulkDropZone`: drag-enter CSS state, `onFilesAdded` callback fires, `multiple` attribute present
- `FileSwitcherStrip`: arrow-key navigation stays within the strip, active chip `aria-current="true"`, error chip `data-status="error"`
- `UploadSaveBar`: progress bar appears during upload, discard link fires callback
- `BulkDocumentEditLayout`: N=0: drop zone visible, no save bar; N=1: single layout; N≥2: switcher strip appears

Missing test: `bulkTitleFromFilename` utility

No test file for `$lib/utils/filename.ts` is visible in the diff. This is a pure function — easy to test and important to verify edge cases (multiple dots, Unicode, empty string, path separators).

Missing test: error state in `save()` — chunk partial failure

The `save()` function marks individual files as `status: 'error'` on chunk failure. There is no visible test verifying that a failed chunk correctly marks the right file entries as error status while leaving successfully-uploaded files unchanged.

Missing coverage: permission enforcement

The new `@RequirePermission(Permission.WRITE_ALL)` on `quickUpload` doesn't have an explicit `returns403_when_user_lacks_WRITE_ALL` test in the diff. The PR adds tests for the happy path with `@WithMockUser(authorities = "WRITE_ALL")` but no negative authorization test for the extended endpoint (though the existing test suite may cover it — worth confirming).

Positive Notes

- Test names follow the `should_<verb>_<when_condition>` sentence format ✅
- `ArgumentCaptor` used for tag name verification — behavioral test, not snapshot ✅
- `@WebMvcTest` slices used throughout — no unnecessary `@SpringBootTest` ✅
- Test helpers `pdfFile(name)` and `stubStoreDocument(filename)` used for reuse ✅

🎨 Leonie Voss — UX Designer & Accessibility Lead
Verdict: ⚠️ Approved with concerns
The bulk upload flow is well-structured and responsive. Accessibility work is evident (aria-live, aria-current, aria-label on progress bar, 44×44 touch targets on scroll buttons). A few issues remain that affect the senior audience.
Blockers
1. Broken JSON message files prevent translated UI strings from rendering
`messages/de.json`, `en.json`, `es.json` have git merge conflict markers. If the Paraglide build falls back to key names, users will see raw strings like `bulk_drop_hint` instead of "Eine oder mehrere Dateien ablegen" ("Drop one or more files"). Fix before merge.

High Priority

2. `BulkDropZone.svelte` — drop zone label / description pairing

The `aria-label="Dateien ablegen"` (from `m.bulk_drop_zone_label()`) on the drop zone is good, but the description text (`m.bulk_drop_desc()`) is rendered as a `<p>` without being associated to the drop zone via `aria-describedby`. Screen readers won't connect the description to the interactive region. Fix: associate the two via `aria-describedby`.

3. `ScopeCard.svelte` — per-file title input

The title field label must be explicitly associated. If it's a `<label>` wrapping an `<input>`, it's fine. If it uses `placeholder` as a substitute for a visible label, that's a WCAG 1.3.1 failure — placeholder text is not a label and disappears on input, which is difficult for seniors.

Medium Priority

4. `FileSwitcherStrip` — chip truncation with tooltip

The PR description mentions chip titles are truncated with a tooltip. Tooltips triggered only on hover are inaccessible for keyboard-only and touch users. Ensure the full title is exposed via the `title` attribute (shows on focus too) or via a separate `aria-label` on the chip button.

5. Color-only status on error chips

Error chips use `data-status="error"` and presumably a color change. Verify there's also a visual icon or text cue (e.g., the `m.bulk_file_error_chip_label()` string applied as `aria-label` or visible text) — color alone fails WCAG 1.4.1 for the 8% of color-blind male users in the senior audience.

6. `UploadSaveBar` progress bar

The PR description notes an `aria-label` was added to the progress bar. Confirm `<progress>` or `role="progressbar"` includes `aria-valuenow`, `aria-valuemin="0"`, and `aria-valuemax` so screen readers announce completion percentage. The `aria-live` region with file titles is ✅.

Positive Notes

- `BulkDropZone`: `multiple` file input correctly labeled, `aria-label` present ✅
- Arrow-key navigation stays within `FileSwitcherStrip` (no escape to page navigation) ✅
- `aria-current="true"` on the active chip — screen readers announce the active file ✅
- Save bar stays visible (`sticky bottom-0`) ✅
- Design tokens (`bg-surface`, `border-line`, `text-ink`) — no hardcoded hex values visible in the diff ✅
- The `inset-x-0` panel means this fills the viewport correctly on small screens ✅

🖥️ Tobias Wendt — DevOps & Platform Engineer
Verdict: ⚠️ Approved with concerns
No new infrastructure. One config change with operational implications.
Blockers
1. Broken JSON in message files blocks CI frontend build
`messages/de.json`, `en.json`, `es.json` all contain `<<<<<<< HEAD` merge conflict markers. `npm run build` will fail. Fix before merge.

Concerns
2. `max-request-size: 500MB` — operational memory risk

`file-size-threshold: 2KB` means files over 2 KB are buffered to disk rather than memory — this is the right mitigation and is well-applied. ✅

However, 500 MB is the per-request limit for the entire Jetty connector, not just the multipart buffer. A malicious (or accidental) request that sends a 500 MB non-multipart body will block a Jetty thread and pin 500 MB of heap until the connection times out. On the CX32 (8 GB RAM) running Postgres, MinIO, and the JVM, concurrent abuse could cause OOM.

Recommendation: add a Jetty `maxRequestHeaderSize` and consider a `ContentSizeLimiter` filter or Spring's `CommonsMultipartResolver` `maxUploadSize`. For now the threshold-to-disk is the right call, but this should be documented as a known operational risk in the issue or a follow-up.

3. `file-size-threshold: 2KB` — good

This means individual file parts are spooled to disk immediately, not held in heap. For a bulk upload of 10×50 MB files, this prevents the JVM from holding 500 MB in memory simultaneously. This is the correct Jetty/Spring multipart configuration for large file uploads. ✅
4. No new Docker Compose services or volumes — clean
No infrastructure additions in this PR. ✅
5. No new CI workflow changes
The PR description does not mention any CI changes. Backend tests run via `./mvnw test`, frontend via `npm run test`. Both should pick up the new test files automatically. ✅

Positive Notes

- The comment `# supports 10-file chunk at max per-file size; see #317` on the YAML config is exactly the right documentation style — explains the why and links the issue ✅
- `log.info("quickUpload actor={} ...")` will be parseable by Loki/Grafana without regex gymnastics ✅
Verdict: ⚠️ Approved with concerns
The implementation aligns well with the issue #317 requirements as described in the PR. One blocker and several requirement gaps to note.
Blockers
1. Broken message files invalidate the i18n requirements
Three message files (`de.json`, `en.json`, `es.json`) contain unresolved git merge conflict markers. The i18n requirement (16 new keys across all three languages) cannot be considered met until these files are valid JSON.

Requirements Coverage Assessment

FR covered ✅

- Title derivation via `bulkTitleFromFilename()`
- `goto('/documents')` on success
- Batch cap enforcement: `BATCH_TOO_LARGE` → 400
- Error code mirrored in `errors.ts` with i18n translations
OQ-01: What happens to a partially successful chunk?
If a chunk of 10 files returns `{ errors: [{ filename: "..." }] }` for 3 files but succeeds for 7, the current implementation marks chunk-failing files as `status: 'error'` using index position. However, the backend returns errors keyed by filename, not by index. The client code uses `errorCount` to mark the first N files of the chunk as errored, which is incorrect if errors are sparse within the chunk.

Acceptance criterion that appears to be missing: "When a chunk upload partially fails, only the files that actually failed are marked as errored."
OQ-02: No unsaved-changes guard on navigation
The back link (`← Übersicht`, "Overview") in `BulkDocumentEditLayout` is a plain `<a href="/documents">`. If a user has added 5 files with metadata and accidentally clicks back, all state is lost with no warning. For the target audience (senior transcribers on laptops), this is a major UX risk.

The issue description does not mention an unsaved-changes guard as a requirement — was this intentionally deferred?
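One low-cost shape for such a guard is a pure decision helper wired into SvelteKit's `beforeNavigate` — a sketch under assumed names, not code from the diff:

```typescript
// Pure decision helper so the guard is unit-testable. Wiring is one line in
// the component: beforeNavigate((nav) => { if (shouldBlockNavigation(...) &&
// !confirm(...)) nav.cancel(); }). All names here are illustrative.
export function shouldBlockNavigation(
  stagedFileCount: number,
  saveInFlight: boolean,
  userConfirmedDiscard: boolean
): boolean {
  // Block only when files are staged, no save is running,
  // and the user has not explicitly chosen to discard.
  return stagedFileCount > 0 && !saveInFlight && !userConfirmedDiscard;
}
```

Keeping the predicate pure means the confirm-dialog copy and the `beforeNavigate` wiring stay trivial, and the decision logic gets the same Vitest treatment as the rest of the layout state.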
OQ-03: N=1 layout behavior
The PR description says "N=0/1/≥2 layout switching" but the `isMulti = $derived(files.size >= 2)` derived value only distinguishes 0 and ≥1 with respect to the drop zone, and ≥2 for the switcher strip. Is the N=1 layout meaningfully different from N≥2? What is the specified behavior when `files.size === 1` — does the switcher strip appear with one chip?

OQ-04: `metadataComplete` field

`DocumentBatchMetadataDTO` includes a `metadataComplete: Boolean` field. The PR description does not mention a UI control for this field. Is it always sent as `null`? If so, what is the document lifecycle effect — does the document remain in `UPLOADED` status? Was this field included for future use, or is it expected to be set by the bulk upload flow?

Review concerns addressed — round 3
All open reviewer blockers and suggestions from the latest review cycle have been resolved in 3 new commits.
🔴 Blocker fixed (all reviewers)
fix(i18n): remove orphaned merge conflict markers from message files (`f1b3e8c2`)

All three message files (`de.json`, `en.json`, `es.json`) had a bare `<<<<<<< HEAD` at line 814 with no corresponding `=======` or `>>>>>>>` marker, making them invalid JSON and breaking the Paraglide build. The stray line has been removed from each file. All three files validate cleanly as JSON.

🐛 Bug fixed (Elicit, @felixbrandt)

fix(bulk-upload): match error chips by filename, not by chunk position (`74b473e3`)

`save()` was using `errorCount` to mark the first N entries in a chunk as errored, regardless of which filenames the backend reported as failed. A backend error for `b.pdf` at index 1 would incorrectly mark `a.pdf` (index 0) as the error chip.

Fix: build a `Set<string>` of error filenames from `body.errors[].filename` and match each `chunk` entry by `entry.file.name`.

Test added: "save marks only the file whose filename matches the backend error, not adjacent files" — adds 3 files (a, b, c), server returns error only for b, asserts exactly one error chip and that it contains "b".
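The matching logic described above isolates cleanly into a small pure function — a sketch assuming the `{ errors: [{ filename }] }` response shape and a `FileEntry` with `file.name` and `status`, as named in this thread:

```typescript
// Illustrative types matching the names used in the review thread.
interface UploadError { filename: string }
interface FileEntry { file: { name: string }; status: 'pending' | 'ok' | 'error' }

// Marks exactly the entries whose filename the backend reported as failed;
// position within the chunk is irrelevant.
export function markErrorsByFilename(chunk: FileEntry[], errors: UploadError[]): void {
  const failed = new Set(errors.map((e) => e.filename));
  for (const entry of chunk) {
    entry.status = failed.has(entry.file.name) ? 'error' : 'ok';
  }
}
```

`Set` membership keeps the check O(1) per entry and independent of the order in which the backend lists errors.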
♿ Accessibility fixes (Leonie)
fix(bulk-upload): accessibility improvements and fetch comment (`6b10daee`)

- `BulkDropZone`: added `aria-describedby="bulk-drop-desc"` to the drop zone region and `id="bulk-drop-desc"` to the description `<p>` — screen readers now connect the description to the interactive region.
- `UploadSaveBar`: added explicit `aria-valuenow`, `aria-valuemin={0}`, and `aria-valuemax` to the `<progress>` element for consistent screen reader announcement across browsers.
- `FileSwitcherStrip`: added a non-color `!` indicator (red, `aria-hidden`) on error chips so error state is not communicated by color alone (WCAG 1.4.1).
- `BulkDocumentEditLayout`: added a comment explaining why raw `fetch` is used instead of a SvelteKit form action (chunked FormData with per-chunk progress tracking; session cookie sent automatically for same-origin requests).
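For context, the chunked loop that comment documents could look roughly like this — a sketch assuming the endpoint and 10-file chunk size from the PR description, with the metadata part and per-file error marking elided:

```typescript
// Sketch only: the real save() also appends the metadata JSON part and
// marks per-file error state from the response body.
export async function uploadInChunks(
  files: File[],
  onProgress: (done: number, total: number) => void,
  chunkSize = 10
): Promise<boolean> {
  let hadErrors = false;
  for (let i = 0; i < files.length; i += chunkSize) {
    const formData = new FormData();
    for (const file of files.slice(i, i + chunkSize)) {
      formData.append('files', file);
    }
    // Same-origin request: the browser attaches the session cookie itself.
    const res = await fetch('/api/documents/quick-upload', { method: 'POST', body: formData });
    if (!res.ok) hadErrors = true;
    onProgress(Math.min(i + chunkSize, files.length), files.length);
  }
  return !hadErrors; // caller navigates to /documents only when this is true
}
```

Driving the determinate progress bar from `onProgress` keeps the transport loop free of any DOM or store knowledge.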
3 commits pushed. All bulk upload component tests (10/10 BulkDocumentEditLayout, 5/5 FileSwitcherStrip, 5/5 UploadSaveBar) green. The 4 pre-existing failures in `ChronikErrorCard` and `UsersListPanel` are unrelated to this PR.

👨💻 Felix Brandt — Senior Fullstack Developer
Verdict: ⚠️ Approved with concerns
Great feature delivery. Component decomposition is clean — BulkDropZone, FileSwitcherStrip, ScopeCard, UploadSaveBar are each nameable in one word, and the state ownership in BulkDocumentEditLayout is clear. Svelte 5 patterns are correct throughout: `SvelteMap`, `$derived`, `$state`, keyed `{#each}`, `$effect` for the keyboard listener. Test coverage is thorough.
ScopeCard test is wrong and will fail (`ScopeCard.svelte.spec.ts`, line 10):

The component's actual class string for the `per-file` variant is `border-accent bg-accent-bg`. The string `brand-mint` does not appear anywhere in it. This test either always fails (blocked by a vitest skip/pending mechanism I can't see) or passes vacuously. It needs to assert the actual class token used.

`DocumentBatchMetadataDTO` in `api.ts` has wrong field name and type (`frontend/src/lib/generated/api.ts`, line ~1687):

The Java DTO field is `tagNames: List<String>`. The frontend `save()` function correctly sends `tagNames` (so there's no runtime bug due to raw `fetch`), but the generated TypeScript schema is stale. Either `npm run generate:api` was run against a pre-`tagNames` version of the backend, or there's a serialisation override. Regenerate from the current backend and verify.

Suggestions

- `BulkDocumentEditLayout` is 282 lines — acceptable for an orchestrator but borderline. The template section has two big `{#if isMulti}` branches that each re-embed `WhoWhenSection` + `DescriptionSection`. Consider extracting a `SingleFileForm.svelte` and `MultiFileForm.svelte` to keep this file as a pure state owner.
- `applyBatchMetadata` calls `documentRepository.save()` twice when `tagNames` is present (`DocumentService.java`): the `storeDocument` path saves the document, `applyBatchMetadata` modifies it and saves again, and then `updateDocumentTags` causes a reload. Three round-trips per file in the tags case. Acceptable now, but worth a TODO.
- `activeId!` non-null assertion in the `oninput` handler (`BulkDocumentEditLayout.svelte`, line ~155): this is inside `{#if activeFile}`, which already guards that `activeId` is non-null — safe. No change needed, just confirming the reasoning.
- `WhoWhenSection` `autofocus` removal — removing `autofocus` from the date field is the right call for the bulk layout (focus should start at the file drop zone), but check that the edit-document flow (`/documents/{id}/edit`) still works as expected — that page also uses `WhoWhenSection`.

🏛️ Markus Keller — Application Architect
Verdict: ✅ Approved
Layer discipline is clean.
`DocumentBatchMetadataDTO` lives in `dto/`, `validateBatch` and `storeDocumentWithBatchMetadata` live in the service layer, and the controller stays thin. Cross-domain access (`personService.getById`, `tagService.findOrCreate`) goes through the owning services, not directly to repositories. Good.

Concerns (non-blocking)

- `max-request-size: 500MB` is a structural change (`application.yaml`): the comment is appreciated, but 500MB is the theoretical ceiling; in practice a 10-file chunk at 50MB each = 500MB loaded into the Jetty thread before the first byte is processed. On the 8GB VPS this is manageable, but the reverse proxy (Caddy) also has a default body size limit — verify `request_body_max_size` in the Caddyfile. And Spring's `file-size-threshold: 2KB` means everything above 2KB goes to disk during upload, which is the right setting for large files.
- `applyBatchMetadata` reload-after-tags pattern (`DocumentService.java`): this is correct given that `updateDocumentTags` operates at the DB level (tag find/create + join table update) and the managed entity won't reflect the join table changes without a reload. The pattern is sound but means 2 saves + 1 reload per file when tags are present. Worth tracking as a refactor candidate if tag application per bulk upload becomes a latency concern.
- `validateBatch` is not `@Transactional` — correct. It's pure validation with no DB writes. The annotation would add overhead for nothing.
- The generated `api.ts` includes thumbnail/backfill API changes that appear unrelated to this PR (`generateThumbnails`, `getDocumentThumbnail`, `thumbnailStatus`, `BackfillStatus`). These landed because `npm run generate:api` was run against the current backend. Not a problem architecturally — this is expected behaviour of the codegen step — but worth noting that these are incidental changes in this diff.

🔒 Nora "NullX" Steiner — Application Security Engineer
Verdict: ✅ Approved
No new attack surface introduced. The security posture for this endpoint is consistent with the rest of the upload flow.
Confirmed Controls
- `@RequirePermission(Permission.WRITE_ALL)` present on `quickUpload` — endpoint is protected before any processing.
- Batch cap enforced server-side (`validateBatch` at service layer, not client-only) — prevents resource exhaustion via oversized batches.
- `ALLOWED_CONTENT_TYPES` check — unchanged and correct.
- Raw `fetch` for same-origin requests — session cookie sent automatically. CSRF is disabled app-wide by design (backend uses HTTP Basic via session cookie, not form-based CSRF tokens). Consistent with existing security config.

Observations (non-blocking)
Content-type validation trusts the `Content-Type` header from the client, not magic byte inspection: a client can send `Content-Type: application/pdf` with a non-PDF body. This is the same as the single-file upload flow — it's an existing accepted risk, not a regression. Magic byte validation would be a separate hardening effort.

- `metadata.getSenderId()`/`getReceiverIds()` are UUIDs resolved via `personService.getById` — no UUID injection risk; Spring's UUID deserialization rejects malformed values before they reach service code.
- `tagNames` length is bounded only by the batch cap (50 files × N tags) — no explicit tag count or tag name length limit in `validateBatch`. Existing `TagService.findOrCreate` presumably handles empty/null names; verify this path doesn't silently create empty tags.

🧪 Sara Holt — Senior QA Engineer
Verdict: ⚠️ Approved with concerns
Strong test investment: 22 browser-mode component tests, 5 new controller tests, 7 service unit tests. The layered coverage (unit service → MockMvc controller → vitest-browser component) follows the pyramid. I have one blocker and a few gaps.
Blockers
`ScopeCard.svelte.spec.ts` line 10 asserts a class token that doesn't exist in the component:

The actual class string for `per-file` is `border-accent bg-accent-bg`. The regex `/brand-mint/` will not match. This test should either fail on every run (and was not noticed because the suite was run with `--reporter=verbose` hiding the failure) or is somehow being skipped. Fix the assertion to match the class token actually used.

The companion test on line 16 (`not.toMatch(/bg-brand-mint/)`) happens to pass trivially — it also needs updating to assert the actual shared-variant class.

Gaps (suggestions)
- Missing test: navigation happens when ALL chunks succeed — the save-does-not-navigate test covers the failure path. There's no corresponding positive assertion: `expect(goto).toHaveBeenCalledWith('/documents')` after a fully successful save. The negative test alone is weak evidence the happy path works.
- Missing test: partial batch failure (some chunks succeed, some fail) — the current error test mocks `ok: false` for the only chunk. A 12-file batch with chunk 1 succeeding and chunk 2 failing is untested. The `hadErrors` flag logic should be validated across chunks.
- `BulkDocumentEditLayout.svelte.spec.ts` — "save marks file as error" test: the test only waits for fetch to be called, not for the DOM to reflect the error state. The subsequent assertion on `data-status="error"` chips has its own `waitFor` — good. But if `save()` is truly async and the DOM update happens after, the intermediate assertion on `mockFetch.toHaveBeenCalledTimes(1)` could produce a false pass before error marking. Minor.
data-status="error"chips has its ownwaitFor— good. But ifsave()is truly async and the DOM update happens after, the intermediate assertion onmockFetch.toHaveBeenCalledTimes(1)could produce a false pass before error marking. Minor.validateBatch_doesNotThrow_whenFileCountEqualsCapExactly— good boundary test. Consider also testingfileCount = 0(no files, no metadata) — the current implementation would pass throughvalidateBatchwith count=0 and an early return. Make sure the behaviour is intentional.🎨 Leonie Voss — UX Designer & Accessibility Strategist
Verdict: ⚠️ Approved with concerns
Solid accessibility effort on the new components — `aria-label`, `aria-current`, `aria-live`, `aria-describedby`, 44px touch targets on most interactive elements, keyed chip navigation. Two issues need attention before this ships to the senior audience.

Blockers
Error chip is invisible to assistive technology (`FileSwitcherStrip.svelte`, lines ~97-100): the `!` indicator is `aria-hidden="true"`, and the `data-status="error"` attribute is not exposed via an ARIA role. A screen reader user sees only the file title — they have no way to know the upload failed. Fix: render a visually hidden `sr-only` span with the error label next to the visual indicator. The `bulk_file_error_chip_label` key ("Fehler beim Hochladen" / "Upload failed") already exists in all three message files — just add the sr-only span.

Concerns (non-blocking)
- Discard button touch target in `UploadSaveBar` (`UploadSaveBar.svelte`, line ~32): no `min-h-[44px]` here. This button is below the WCAG 2.2 minimum and very easy to miss-tap for a 60+ user on a tablet. Add `min-h-[44px] px-2 flex items-center`.
- "Discard all" appears in two places simultaneously — once in the topbar (visible in `isMulti` state) and again in the UploadSaveBar (always visible when files > 0). For a senior user this creates confusion: "which one clears everything?" Both do the same thing (`discardAll()`), but visually they look different. Consider removing the UploadSaveBar's discard button in `isMulti` state and relying only on the topbar one, or make them visually consistent.
- Brand compliance — the BulkDropZone uses `bg-primary`, `text-primary-fg`, `border-accent`, `bg-accent/10`, which are the correct design tokens. The upload icon (polygon upload arrow) is a non-standard icon not used elsewhere — minor inconsistency but acceptable for the MVP.
- Drop zone at N=0 is purely left-panel (`flex-[55]`). On a narrow viewport (768px tablet in landscape) the right metadata panel gets `flex-[45]` — roughly 340px. Check that `WhoWhenSection` + `DescriptionSection` + `UploadSaveBar` scroll properly without overflow at this width. The `overflow-y-auto` on the form area should handle it, but the save bar must stay visible.

🛠️ Tobias Wendt — DevOps & Platform Engineer
Verdict: ⚠️ Approved with concerns
Code looks solid. One infra concern that needs attention before production deploy.
Concerns
Caddy / reverse proxy body size limit (`application.yaml`): Spring will now accept 500MB multipart bodies, but if the production Caddyfile sets a request body limit (the `request_body { max_size ... }` directive), Caddy will reject chunks above it before they reach the backend. Check the production Caddyfile: with a lower limit in place, Caddy will silently drop the connection for chunks above it, and the client receives a confusing 413 or connection reset. This is a production blocker — dev environments hitting the backend directly (port 8080) won't show this.
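For reference, a sketch of the Caddyfile stanza to verify — the directive names are Caddy v2's `request_body`/`max_size`; the site address and upstream port are illustrative, not taken from the actual production file:

```Caddyfile
# Illustrative Caddy v2 site block — the real production Caddyfile may differ.
archive.example.com {
	# Must be >= Spring's max-request-size (500MB), or bulk-upload chunks
	# get a 413 at the proxy before the backend ever sees them.
	request_body {
		max_size 500MB
	}
	reverse_proxy localhost:8080
}
```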
Memory pressure during large uploads — 500MB is the ceiling, but `file-size-threshold: 2KB` means anything above 2KB gets spooled to disk. That's the right setting. Spring writes temporary files to the JVM temp dir (`/tmp`). On the CX32 VPS (8GB RAM, disk ephemeral), verify `/tmp` is on the root volume (it is by default on Hetzner). Under load, 10 concurrent 50MB uploads = 500MB in `/tmp` simultaneously. Should be fine on CX32, but good to know.

Confirmed

The comment in `application.yaml` (`# supports 10-file chunk at max per-file size; see #317`) is exactly the kind of justification comment that prevents someone from "simplifying" this back to 50MB six months from now.

📋 Elicit — Requirements Engineer
Verdict: ⚠️ Approved with concerns
The implementation matches the issue #317 intent well: three visual states (N=0 drop zone, N=1 single form, N≥2 split panel with switcher), per-file titles, shared metadata, chunked upload. The PR description is accurate and complete. Two structural observations worth tracking.
Concerns
Server-side form actions in `+page.server.ts` are now dead code.

The new `+page.svelte` replaces the entire form with `BulkDocumentEditLayout`, which uses raw `fetch` — not SvelteKit form actions. But the `+page.server.ts` presumably still exports `actions.save` and `actions.saveReviewed` (the diff doesn't show it being changed). These actions are now unreachable. This isn't a runtime bug, but it's requirements debt: the single-document creation path via progressive enhancement (`use:enhance`) is gone. Consequence: `POST /documents/new?/save` no longer works. If the product decision is that `/documents/new` is JS-required going forward, close this out by removing the dead form actions and documenting the decision. If progressive enhancement should be preserved, the old form needs to be kept as a fallback path.

"Discard all" affordance is duplicated and semantically ambiguous.
In `isMulti` state, users see two discard controls simultaneously:

- `{m.bulk_discard_all()}` — only visible in N≥2 state
- `{m.bulk_discard_all()}` — always visible when files > 0

Both call `discardAll()`, which removes ALL files. But a user looking at the save bar might reasonably interpret the discard there as "discard the current save action" (i.e., cancel this upload attempt) rather than "remove all files and start over." This is a usability ambiguity, particularly for the senior audience. Suggest clarifying the save bar's discard label to something like `{m.bulk_cancel_upload()}`, or removing the duplicate.

Single-file flow loses features the old `+page.svelte` had:

- `FileSectionNew` with filename parsing (auto-extract date + person from filename) is gone.
- `TranscriptionSection` (add transcription during create) is gone.
- Titles now come from `bulkTitleFromFilename` (strips separators, no date/person parsing).

These may be intentional scope reductions for the MVP bulk flow. If so, consider opening follow-up issues to restore filename parsing and transcription-on-create in the bulk context.
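For context, the replacement utility's described behaviour (strip the extension, turn separators into spaces) can be sketched as follows — a hypothetical reimplementation for illustration, not the shipped code; edge-case handling is an assumption:

```typescript
// Hypothetical sketch of bulkTitleFromFilename(): strip the file extension,
// then replace "_" / "-" separators with spaces. Trimming and multi-dot
// handling are assumptions, not taken from the PR.
function bulkTitleFromFilename(filename: string): string {
  const withoutExtension = filename.replace(/\.[^.]+$/, '');
  return withoutExtension.replace(/[_-]+/g, ' ').trim();
}

console.log(bulkTitleFromFilename('2024-03_rentenbescheid.pdf')); // → "2024 03 rentenbescheid"
```

Unlike the old `FileSectionNew` parsing, nothing here attempts date or person extraction — which is exactly the scope reduction noted above.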
`/brand-mint/` never matched (component uses `border-accent bg-accent-bg`); companion test also updated to assert the meaningful negative. `getByText('5')` fixed to `exact: true` to avoid strict-mode ambiguity. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

Review concerns addressed — round 4
All open reviewer blockers and suggestions from the latest review cycle have been resolved in 5 new commits.
🔴 Blockers fixed
8cf4f7c2 — ScopeCard spec: wrong regex assertions (@Felix, @Sara)

`ScopeCard.svelte.spec.ts` was asserting `/brand-mint/` on a component whose `per-file` variant uses `border-accent bg-accent-bg` — this class string never contains `brand-mint`, so the test was failing on every run. Fixed:

- `/brand-mint/` → `/bg-accent-bg/` (per-file variant asserts the actual accent class)
- `not.toMatch(/bg-brand-mint/)` → `not.toMatch(/bg-accent-bg/)` (shared variant now asserts a meaningful negative, not a trivially-passing one)
- `getByText('5')` → `getByText('5', { exact: true })` to resolve strict-mode ambiguity with the translated label "Gilt für alle 5"

60f1db1f — Error chip invisible to assistive technology (@Leonie)

`FileSwitcherStrip.svelte`: the `!` indicator was `aria-hidden="true"` with no screen reader fallback. Added `<span class="sr-only">{m.bulk_file_error_chip_label()}</span>` before the visual indicator. The i18n key already existed in all three message files. Added a companion test that asserts `.sr-only` is present inside `[data-status="error"]`.

262c7926 — Stale `DocumentBatchMetadataDTO` type in api.ts (@Felix)

`api.ts` line 1698 had `tags?: string` but the Java DTO declares `List<String> tagNames`. Corrected to `tagNames?: string[]`. No runtime bug (the `save()` function uses raw `fetch`, not the typed client), but the schema now matches the backend contract.

🟡 Open suggestions addressed
cb9962f0 — Missing positive navigation test (@Sara)

Added `save() navigates to /documents when all chunks succeed` to `BulkDocumentEditLayout.svelte.spec.ts`. The negative test (does not navigate on error) had no matching positive assertion.

9aed929b — Discard button WCAG touch target (@Leonie)

`UploadSaveBar.svelte`: the discard button lacked `min-h-[44px]`, putting it below the WCAG 2.2 minimum for senior users on tablets. Added `min-h-[44px] flex items-center px-2`.

Test results
Pre-existing failures (`ChronikErrorCard.svelte.spec.ts`, `admin/users/layout.svelte.spec.ts`) — both fail on the same commits that predate this branch.

🏗️ Markus Keller — Software Architect
Verdict: ✅ Approved
What was checked
Findings
Positives:
- `+page.svelte` replacement is exemplary thin-route design — 3 lines delegating entirely to `BulkDocumentEditLayout`. No logic leaked into the route.
- `DocumentBatchMetadataDTO` is a proper input DTO with no response duties — clean boundary.
- `storeDocumentWithBatchMetadata` handles domain logic without the controller reaching into repositories.

Suggestions (non-blocking)
- `BulkDocumentEditLayout.svelte` at 282 lines orchestrates file state, chunked fetch, error mapping, and navigation in one component. As complexity grows, extracting a `useBulkUpload()` composable (Svelte `$state` + functions) would leave the `.svelte` file as pure layout. Not urgent now, but worth noting.
- Architectural constants lack context: the 10-file chunk size and the 50-file backend limit are load-bearing numbers with no explanation of why they were chosen. A brief inline comment (e.g. `// 10 files per chunk keeps multipart payload under Jetty's default limits`) would preserve the reasoning when someone bumps these later.

Blockers
None.
👨💻 Felix Brandt — Senior Fullstack Developer
Verdict: ✅ Approved
What was checked
Findings
Positives:
- `bulkTitleFromFilename` is a clean, well-named utility with 4 targeted test cases — exactly the right scope.
- `SvelteMap<string, FileEntry>` is the correct Svelte 5 reactive primitive for a keyed mutable collection. Good call.
- Error attribution by filename (`files.get(filename)`) is correct and regression-tested with a dedicated multi-file scenario (`save marks only the file whose filename matches...`).
- The `sr-only` accessibility fix for error chips is clean — screen reader text separate from the visual `!` indicator, tested with its own spec.
- `afterEach(cleanup)` + `vi.clearAllMocks()` throughout.
- `storeDocumentWithBatchMetadata` unit tests are present and well-targeted.
- `+page.svelte` replacement is a model of how to reduce a route to its minimum.

Suggestions (non-blocking)
- Double-click / concurrent save guard missing: `BulkDocumentEditLayout`'s `save()` has no guard against concurrent invocations. A user who clicks Save twice before the first request resolves fires two sets of chunk requests. A simple `let saving = $state(false)` with `disabled={saving || fileCount === 0}` on the save button prevents this. Should be paired with a test.
- `WhoWhenSection` autofocus removal is a behaviour change without a test: both `autofocus={!initialDateIso}` and `autofocus={!!initialDateIso}` were removed, affecting the existing single-document upload flow. The `page.svelte.spec.ts` was updated, but no test explicitly verifies the old behaviour was intentionally dropped. A comment in the spec (or a negative assertion) would help future readers know this was deliberate.

Blockers
None.
🛠️ Tobias Wendt — DevOps & Infrastructure
Verdict: ⚠️ Approved with concerns
What was checked
Findings
Main concern — multipart memory behaviour:
`application.yaml` raises:

- `spring.servlet.multipart.max-file-size` → `100MB`
- `spring.servlet.multipart.max-request-size` → `500MB`
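In `application.yaml` form, the settings under discussion look roughly like this — a sketch assembled from values quoted in this thread, not the actual file:

```yaml
# Sketch of the multipart settings discussed in this review.
spring:
  servlet:
    multipart:
      max-file-size: 100MB       # per-file cap
      max-request-size: 500MB    # whole multipart body; supports a 10-file chunk at max per-file size (#317)
      file-size-threshold: 2KB   # parts above this spool to a temp file instead of heap
```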
file-size-thresholdis 0 bytes, meaning all parts are read into memory before hitting the controller. Verify thatspring.servlet.multipart.file-size-thresholdis set to a sane value (e.g.10MB) so large parts are streamed to a temp file rather than held in heap. This matters in containers with bounded JVM memory.Theoretical maximum: 50 files × 100MB = 5GB, capped to 500MB by
max-request-size— the combination is reasonable, but should be documented.Reverse-proxy gap:
If Nginx or Caddy sits in front of the backend (which it does in production), its
client_max_body_size(Nginx) / request body limits (Caddy) also need updating. A 500MB upload silently fails at the proxy layer before reaching Spring — no change to proxy config is visible in this diff.Positives:
Blockers
None — but recommend verifying
file-size-thresholdand proxy body limits before first production deploy with large files.📋 Elicit — Requirements Engineer
Verdict: ⚠️ Approved with concerns
What was checked
Covered well ✅
- `bulkTitleFromFilename`
- `/documents` on full success
- `BATCH_TOO_LARGE` error code wired end-to-end (backend → `errors.ts` → i18n)

Gaps (UX follow-up items)
Destructive discard with no confirmation: "Alle verwerfen" (Discard all) immediately resets all state with no warning. For a user who has filled metadata for 10 files, this is unrecoverable. A "Sind Sie sicher?" ("Are you sure?") confirmation or a brief undo toast is strongly recommended before launch.
50-file limit not communicated proactively: When the backend rejects a batch exceeding 50 files, the `ok: false` response triggers per-file error chips — but the user is never told why. The limit should be shown near the drop zone, or as a clear banner error when `BATCH_TOO_LARGE` is returned, not buried in per-file error chips.

Partial chunk failure is opaque: If chunk 1 of 3 fails, files in chunks 2 and 3 are not attempted. Error chips appear only on the failed chunk's files. Users see some files marked as error and others still "idle" — with no explanation that the idle files were skipped, not attempted.
No success confirmation after navigation: After `goto('/documents')`, the user arrives at the list with no toast or banner confirming how many documents were saved. The transition feels abrupt.

Blockers
None — all items are UX improvements suitable for a follow-up issue.
🔐 Nora "NullX" Steiner — Application Security
Verdict: ✅ Approved
What was checked
Findings
Positives:
- `@RequirePermission` is applied to the batch endpoint ✅
- `validateBatch()` in the service layer ✅
- `max-file-size: 100MB` and `max-request-size: 500MB` ✅

Observations (non-blocking)
CSRF on the client-side fetch: `BulkDocumentEditLayout.svelte` issues a `fetch('/api/documents/batch', ...)` directly from the browser. SvelteKit's built-in CSRF protection applies only to form `action` submissions processed by the SvelteKit server. This raw fetch bypasses the SvelteKit CSRF layer and hits Spring Boot directly. Confirm that `SecurityConfig` handles this correctly — the typical pattern for session-cookie-secured Spring APIs is to disable CSRF protection for `/api/**` explicitly (stateless REST with `SameSite=Strict` or `Lax` cookies handles CSRF via the browser). This is an existing pattern in the app, not new to this PR, but worth a confirmation comment in `SecurityConfig`.

Metadata blob has no explicit size cap: The `metadata` JSON part could theoretically contain a very large `titles` array. The 500MB request cap provides coarse protection, but a dedicated metadata payload size guard (e.g. reject if the `metadata` part exceeds 1MB) would be more precise and fail faster.

File type validation: Not visible in this diff — assumed to be inherited from the existing quick-upload path. If file type validation (MIME type / magic bytes check) is not present in the service, it should be added as a follow-up.
Blockers
None.
🧪 Sara Holt — QA & Test Engineer
Verdict: ✅ Approved
What was checked
Strong coverage ✅
- `BulkDocumentEditLayout`
- `FileSwitcherStrip`
- `ScopeCard`
- `UploadSaveBar`
- `DocumentControllerTest`
- `storeDocumentWithBatchMetadata`
- `bulkTitleFromFilename`
- Network error path untested: No component test simulates `fetch` throwing (e.g. `mockFetch.mockRejectedValue(new Error('network error'))`). If `save()` has no try/catch, an unhandled rejection leaves the UI in a stuck state with no error feedback. Worth adding a test and confirming the guard exists.
- Double-click save: No test verifies that clicking Save twice doesn't double-submit. See also Felix's implementation note.
- Partial success within a chunk (`ok: true` + non-empty `errors` array): The backend `QuickUploadResult` allows a chunk to return HTTP 200 with errors. Component tests only cover the `ok: false` path for error marking. A test with `{ ok: true, json: () => ({ created: [], updated: [], errors: [{ filename: 'x.pdf', code: 'FILE_UPLOAD_FAILED' }] }) }` would confirm this edge case is handled.
- E2E golden path missing: No Playwright test covers the end-to-end happy path (drop files → fill metadata → save → arrive at `/documents`). This feature replaces the entire new-document flow — an E2E test is the only way to catch regressions that unit tests can't see.

Blockers
None.
🎨 Leonie Voss — UI/UX & Accessibility Expert
Verdict: ⚠️ Approved with concerns
What was checked
Positives ✅
- `min-h-[44px]` ✅ (fixed in this session — was a blocker)
- `h-[44px] w-[44px]` ✅
- `sr-only` error label on error chips ✅ (fixed in this session — was a blocker)
- `aria-live="polite"` region announces active file name ✅
- `aria-current="true"` on active chip ✅
- `<progress>` has `aria-label`, `aria-valuenow`, `aria-valuemin`, `aria-valuemax` ✅

Concerns
- Text size is too small for 60+ users: The chip label is `text-[11px]` and the number badge is `text-[9px]`. At standard laptop DPI, 9px is below the threshold where many older users can read comfortably. The target audience (laptop/tablet transcribers, many 60+) makes this a real usability risk. Suggest `text-xs` (12px) for labels and at minimum 11px for badges. Not a hard WCAG violation, but it conflicts with the project's stated audience.
- Focus management after file removal: When a chip is removed, focus returns to `<body>`. A user navigating by keyboard who removes a file is left completely disoriented. The `onRemove` handler should move focus to the adjacent chip (previous, falling back to next), or to the drop zone if all files were removed.
- Hidden scrollbar signals no overflow: `scrollbar-width: none` on the strip hides the horizontal scrollbar. A mouse-only user with 20 files has no visual indicator that more chips exist to the right. Consider faint gradient fade-out overlays at the strip edges to communicate overflow. Keyboard users are covered by ArrowRight, but pointer-only users are not.
- Destructive discard with no confirmation: "Alle verwerfen" (Discard all) has no safety step. For a 60+ user who has spent time filling metadata for 8 files, an accidental click or tap is unrecoverable. A brief confirmation dialog ("Alle Dateien verwerfen?" with Cancel/Bestätigen) or a 5-second undo toast is strongly recommended before this feature reaches users.
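The focus-on-remove suggestion can be sketched as a small pure helper — hypothetical names, the real fix would live in the strip's remove handler:

```typescript
// Hypothetical helper: given the removed chip's index and how many chips remain,
// return the index that should receive focus — the previous chip, falling back
// to the chip that slid into the removed slot, or null when the strip is empty
// (in which case focus should move to the drop zone instead).
function nextFocusIndex(removedIndex: number, remainingCount: number): number | null {
  if (remainingCount === 0) return null;
  return Math.max(0, Math.min(removedIndex - 1, remainingCount - 1));
}
```

Keeping this as a pure function makes the focus decision unit-testable separately from the DOM focus call itself.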
Blockers
None — the accessibility fixes from this session (sr-only label, touch targets) are appreciated and move the needle significantly.
All reviewer concerns addressed. Summary by concern:
- Concurrent save guard (@reviewer: "double-click fires two fetches") — `saving` guard in `save()` + `disabled` prop to `UploadSaveBar`; test: `save() does not call fetch a second time when already saving` ✅; commits: `fix(bulk-upload): guard save() against concurrent invocations` + `fix(bulk-upload): thread saving guard through UploadSaveBar disabled prop`
- Network error handling (@reviewer: "fetch() rejects on network failure — unhandled") — test: `save() marks all chunk files as errored when fetch throws a network error` ✅; commit: `fix(bulk-upload): catch network errors and mark chunk files as errored`
- Partial success (HTTP 200 + errors array) (@reviewer: "ok:true with errors[] is not handled") — `body.errors` is now inspected even on `ok: true`; only named files get error status; test: `save() marks only the failed file when server returns HTTP 200 with a partial errors array` ✅; commit: `fix(bulk-upload): handle partial success — mark only error-named files on ok:true`
- Chip font size (@reviewer: "text-[9px] / text-[11px] too small for 60+ users") — badge raised to `text-[11px]` and chip label to `text-xs` (12 px); no test needed (visual) ✅; commit: `fix(bulk-upload): raise chip label and badge font sizes for 60+ readability`
- Focus management after chip removal (@reviewer: "focus jumps to body after remove") — `handleRemove()` captures the target chip ID before removal, then focuses it via `tick()`; test: `focus moves to the previous chip after the middle chip is removed` ✅; commits: `test(bulk-upload): add focus-after-remove assertion for FileSwitcherStrip` + `fix(bulk-upload): move focus to adjacent chip after removal`
- Gradient overflow indicators (@reviewer: "no visual cue that strip scrolls") — `pointer-events-none` gradient overlays at the track edges; test: existing layout tests still pass ✅; commit: `feat(bulk-upload): add gradient fade overlays to FileSwitcherStrip track`
- Discard confirm dialog (@reviewer: "discard-all is instant — no confirmation") — `handleDiscard()` calls `getConfirmService().confirm()` (optional — null fallback when context absent, so existing tests need no changes); added `bulk_discard_confirm` key in de/en/es; test: `discard-all does not clear files when the user cancels the confirm dialog` ✅; commits: `test(bulk-upload): confirm dialog cancel keeps files (RED)` + `feat(bulk-upload): guard discard-all with confirm dialog`
- Constant comments (@reviewer: "magic numbers 10 and 50") — justification comments added for `chunkSize = 10` (reverse-proxy body size) and the 50-file cap in `DocumentService.validateBatch` ✅; commit: `docs(bulk-upload): explain chunkSize=10 and 50-file cap constants`

All 13 plan items completed. Branch `feat/issue-317-bulk-upload` pushed.

👨💻 Felix Brandt — Senior Fullstack Developer
Verdict: ⚠️ Approved with concerns
Solid feature. Clean separation across the stack, good use of Svelte 5 primitives, and test coverage is thorough. Two things need attention before this is fully clean.
Blockers
1. `autofocus` removal from `WhoWhenSection` is a shared-component regression

`WhoWhenSection.svelte` is used in `BulkDocumentEditLayout` and in the existing single-document edit form (`/documents/[id]/edit`). This PR removes `autofocus` from the date input and the sender typeahead unconditionally. That change bleeds into the regular edit form UX. Fix: add an `autofocus` prop defaulting to `true`, and pass `autofocus={false}` from `BulkDocumentEditLayout`. The bulk layout shouldn't silently reshape the shared component.

2. The `applyBatchMetadata` re-fetch after tag update is an unexplained smell

`DocumentService.java`: `updateDocumentTags` modifies the `doc` relationship through the repository. Within the same `@Transactional` boundary, the JPA entity `doc` should already reflect those changes or be cache-invalidated automatically. The extra `findById` is either (a) working around a JPA first-level cache issue, or (b) unnecessary. If (a) is the real reason, a comment explaining the JPA cache behaviour is required — otherwise a future reader removes it "to simplify" and reintroduces a bug.

Suggestions
3. `BulkDocumentEditLayout.svelte` save() is 50 lines inside a 320-line file

The `save()` async function (lines ~120-175) could be extracted to a `saveBulkUpload(entries, senderId, receivers, dateIso, tags, onProgress)` utility in `$lib/utils/` or a `+page`-level helper. The layout component then orchestrates; the function tests in isolation. Not a blocker for a private app, but worth tracking.

4. Fully-qualified `MockMultipartFile` in controller tests

`DocumentControllerTest.java` imports are clean at the top, but the new test methods use fully-qualified names inline. The existing tests in the same file use a static import for `multipart`. Add `import org.springframework.mock.web.MockMultipartFile;` at the top to match the file's style.

5. `_confirmService = getConfirmService()` try/catch at module init

The silent catch for the test isolation context is pragmatic, but any reader who doesn't know the reason will flag this as swallowed-exception territory. One comment — `// getConfirmService() throws when the CONFIRM_KEY context is not provided (unit tests)` — is worth it here.

What's good
- `SvelteMap<string, FileEntry>` for reactive file state — exactly right
- `untrack()` on `initialSenderId` / `initialReceivers` to avoid reactive loops on prop change — correct and non-obvious, glad it's there
- `$effect` for the keydown listener in `FileSwitcherStrip` — correct use; this is a side effect, not a derived value
- `hadErrors` tracking — handles partial success correctly
- `URL.revokeObjectURL` on remove + `onDestroy` — blob leak prevention is present

🏗️ Markus Keller — Application Architect
Verdict: ✅ Approved
Architecture is clean. Monolith boundaries are respected, no new infrastructure, and the service layer correctly owns all domain logic. One production-readiness concern to validate before shipping.
Production-Readiness Concern (not a code blocker)
`max-request-size: 500MB` needs Caddy validation (`application.yaml`). This is the right change for the backend. But our production reverse proxy is Caddy. Caddy's default `request_body max_size` is 0 (unlimited), but if the Caddyfile explicitly sets one — or if a future hardening pass adds one — large bulk uploads will receive Caddy's 413 before touching the backend.

Action item: verify the production Caddyfile has no conflicting body size limit, or document it in `docs/infrastructure/`. Not blocking this merge, but it should be tracked as a follow-up.

Notes
Layer boundaries: clean

- The controller calls `documentService.validateBatch()` before delegating storage — correct. Validation belongs in the service, and the controller doesn't duplicate the logic.
- `applyBatchMetadata` calls `personService.getById()` — goes through the service, not the repository directly. Boundary respected.
- `validateBatch` is `public` for controller access — acceptable. The alternative would be packaging controller+service together (feature-package style), which the current codebase doesn't do yet. Given the existing layer structure, `public` is the right choice. A `@VisibleForTesting` or service-internal overload isn't needed here.

Progressive enhancement trade-off is intentional and documented
The old `/documents/new` used SvelteKit form actions (`?/save`, `?/saveReviewed`). The new layout uses raw `fetch` for chunked FormData. The comment in `BulkDocumentEditLayout.svelte` explains the reason. This is a deliberate and correctly-reasoned trade-off. The feature inherently requires JavaScript (file preview, drag-and-drop, per-chunk progress). No fallback is possible, and trying to add one would add complexity with no real benefit.
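A rough TypeScript sketch of the chunked-save shape under discussion — `saveInChunks` and its injected `post` callback are illustrative names, not the PR's actual identifiers; the real component builds a multipart `FormData` per chunk and maps per-file errors back to chips:

```typescript
// Illustrative sketch of the chunked save loop: split entries into groups of 10
// and POST each group, tracking whether any chunk failed.
type FileEntry = { filename: string };

async function saveInChunks(
  entries: FileEntry[],
  post: (chunk: FileEntry[]) => Promise<{ ok: boolean }>,
  chunkSize = 10,
): Promise<{ hadErrors: boolean }> {
  let hadErrors = false;
  for (let i = 0; i < entries.length; i += chunkSize) {
    const chunk = entries.slice(i, i + chunkSize);
    const res = await post(chunk); // one multipart request per chunk in the real code
    if (!res.ok) hadErrors = true;
  }
  return { hadErrors };
}
```

Injecting `post` keeps the loop unit-testable without network access, mirroring how the component tests stub `fetch`.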
`DocumentBatchMetadataDTO` — appropriate DTO design

No `@Schema(requiredMode = REQUIRED)` annotations needed — all fields are optional by design. The DTO correctly uses `@Data` (not `@Builder`/`@Value`) since it's a mutable deserialization target. Fine.

Comment in `BulkDocumentEditLayout.svelte` references nginx, not Caddy

The example is illustrative, but our stack uses Caddy. The comment's factual accuracy is secondary to its purpose (explaining the constant), but it could mislead during a future debugging session. Minor.
🔒 Nora "NullX" Steiner — Application Security Engineer
Verdict: ✅ Approved
No new attack surface introduced. Existing controls hold. One low-severity gap worth addressing in a follow-up.
What I Checked
Authentication / Authorization — clean
`quickUpload` retains `@RequirePermission(Permission.WRITE_ALL)`. The `metadata` `@RequestPart` is additive and optional; it doesn't create a new unauthenticated path.

Injection — clean

- `senderId`, `receiverIds` are `UUID`-typed — invalid UUIDs fail deserialization before reaching service code.
- `documentDate` is `LocalDate`-typed — same protection.
- `tagNames: List<String>` flows into `tagService.findOrCreate(tagName)`, which uses JPA — parameterized queries, not string concatenation.
- Titles are set via `doc.setTitle(...)` and persisted via JPA — safe.

Logging — clean
SLF4J parameterized logging. No JNDI injection risk.
Frontend raw fetch — acceptable
Same-origin request. Session cookie is sent automatically. CSRF is structurally mitigated by the custom `Authorization` header / cookie mechanism. No risk here.

Low-Severity Gap (follow-up, not a blocker)
`tagNames` has no length validation — CWE-20

`DocumentBatchMetadataDTO.tagNames` is `List<String>` with no per-element length constraint. A malicious request could send tag names of arbitrary length (e.g., 64KB strings). These reach `tagService.findOrCreate(tagName)`, which hits the DB. The likely outcome is a `DataTruncationException` (tag name column has a length limit), but that exception surfaces as a 500 rather than a 400.

Suggested fix in `DocumentService.validateBatch()`: reject empty or over-length tag names up front. This converts a potential 500 into a clean 400 and prevents unnecessary DB round-trips.
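The shape of the suggested guard, sketched in TypeScript for illustration — the actual fix belongs in the Java service; the 255-character limit and the two violation codes are placeholders, not taken from the schema or the PR:

```typescript
// Illustration of the suggested tag-name guard. MAX_TAG_NAME_LENGTH stands in
// for the real tag-name column limit; the returned codes are hypothetical.
const MAX_TAG_NAME_LENGTH = 255;

function findTagNameViolation(tagNames: string[] | undefined): string | null {
  for (const name of tagNames ?? []) {
    const trimmed = name.trim();
    if (trimmed.length === 0) return 'TAG_NAME_EMPTY';
    if (trimmed.length > MAX_TAG_NAME_LENGTH) return 'TAG_NAME_TOO_LONG';
  }
  return null; // no violation → proceed to findOrCreate
}
```

Running this before any `findOrCreate` call turns the failure into a clean 400 with a specific code, instead of a driver exception mid-transaction.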
DoS Note (informational)
The `500MB` `max-request-size` setting means a single POST can hold 500MB in-flight. For a family app behind authentication, this isn't exploitable by unauthenticated users. If this endpoint is ever exposed more broadly, rate-limiting at the Caddy layer (or a rate-limiting interceptor in Spring) would be appropriate. No action needed now.

🧪 Sara Holt — QA Engineer & Test Strategist
Verdict: ⚠️ Approved with concerns
Coverage is strong and the test pyramid is well-structured. A few gaps that would catch real bugs in future refactors.
Test Pyramid Assessment
- `@WebMvcTest`
- `@ExtendWith(MockitoExtension.class)`
- `vitest-browser-svelte`
1. Missing: confirm dialog path in discard-all test
`BulkDocumentEditLayout.svelte.spec.ts` tests discard-all and verifies the N=0 state is reached. However, `_confirmService` is `null` in the test environment (the catch at component init is documented). The test therefore exercises the no-confirmation path only. The confirm dialog is the primary production path for destructive actions. Consider adding a test that stubs `getConfirmService` to return a mock with `confirm` returning `false` — verifying that the discard does NOT happen when the user cancels.

2. Missing: boundary test for exactly 50 files in controller test
`quickUpload_returns400_whenBatchExceedsCap` tests 51 files (rejected). There's no controller test for 50 files (should pass the cap). `validateBatch_doesNotThrow_whenFileCountEqualsCapExactly` covers this at the service layer but not at the controller boundary. Add a matching controller-level test at exactly 50 files.

3.
`WhoWhenSection` autofocus removal has no regression test

`autofocus` was removed from the date input and sender typeahead in `WhoWhenSection.svelte`. There's no test verifying the previous autofocus behaviour existed (to prove it was intentional to remove it) and no test documenting the new behaviour (no autofocus). If this was a deliberate UX decision, a comment in the component is enough. If it was accidental, there's no safety net.
storeDocumentWithBatchMetadatalacks integration test coverage for tag side effectsThe service unit test
storeDocumentWithBatchMetadata_appliesTagsViaUpdateDocumentTagsmockstagService.findOrCreateanddocumentRepository.findById. It doesn't test the actual JPA transaction behaviour whenupdateDocumentTagsruns within the same transaction assave. This is the exact scenario where an integration test against a real Postgres container would catch@Transactionalordering bugs. Not blocking, but worth a Testcontainers integration test in a follow-up.What's Good
- `validateBatch_doesNotThrow_whenFileCountEqualsCapExactly` — boundary condition at N=50 covered at service layer
- `storeDocumentWithBatchMetadata_leavesTitle_whenIndexExceedsTitlesList` — off-by-one edge case covered
- `save marks only the file whose filename matches the backend error` — precise error attribution test, well-named
- `save() does not call fetch a second time when already saving` — double-click guard tested correctly
- `afterEach(() => { cleanup(); vi.unstubAllGlobals(); vi.clearAllMocks(); })` — clean test isolation

🚀 Tobias Wendt — DevOps & Platform Engineer
Verdict: ⚠️ Approved with concerns
No new infrastructure, no Compose changes, no CI changes — all good. One operational item needs verification before this goes to production.
Blocker (must verify before production deploy)
`max-request-size: 500MB` — Caddy body size limit check required (`application.yaml`). The backend now accepts 500MB bodies. Caddy by default has no body size limit, but if the production Caddyfile was hardened at any point with `request_body { max_size ... }`, bulk uploads beyond that limit will return Caddy's 413 before the backend sees anything. The user gets no meaningful error, and it would be hard to diagnose without knowing this limit exists.

Action: Check the production Caddyfile for `max_size` or `request_body` directives. If one exists, either raise it to at least 500MB or document the mismatch. If none exists, document that this setting is intentionally unlimited.

Minor Note
Comment references nginx, but our proxy is Caddy

`BulkDocumentEditLayout.svelte` has a comment citing nginx as the reason for the 10-file chunk size. The stack uses Caddy, not nginx. The chunk size itself is sound reasoning, but the example is misleading to anyone who reads it while debugging a 413. Update the comment to reference Caddy's `request_body max_size` instead.

What Looks Good
`file-size-threshold: 2KB` — small files stay in memory, large PDFs stream to disk. Correct choice to prevent OOM under load.

📋 Elicit — Requirements Engineer
Verdict: ⚠️ Approved with concerns
The PR description is dense and well-structured. The implementation matches the stated intent. Two requirements concerns: one unintended scope change, and one ambiguity in the destructive-action confirmation path.
Unintended Scope Regression — Existing Flow Affected
Issue: `WhoWhenSection.svelte` `autofocus` removal

The PR removes `autofocus` from the date input and sender typeahead in `WhoWhenSection.svelte`. This component is shared between:

- the single-document create form (`/documents/new`) — this was the original context, where autofocus guided the user through the form sequentially
- the single-document edit form (`/documents/[id]/edit`) — same shared component

The change silently modifies the behaviour of existing flows without a stated requirement to do so. If the autofocus behaviour in single-document forms was intentional and should be preserved, this is a regression. The issue tracker (#317) scopes this to bulk upload — the shared-component change is out of scope.
Requirement question: Was the removal of autofocus from existing single-document forms part of issue #317, or a side effect of using the shared component in the bulk layout?
Ambiguity — Destructive Action Confirmation
The "Alle verwerfen" (Discard All) action appears in two places:
- the topbar button, shown in the `isMulti` state, triggers `handleDiscard()` with a confirm dialog
- the save bar discard link, which also triggers `handleDiscard()`

In unit tests, `_confirmService` is `null`, so `handleDiscard()` discards immediately without confirmation. In production, the confirm dialog fires for both entry points.

This is internally consistent, but the PR description and issue do not specify:
- whether discarding must always require confirmation
- whether the `isMulti`-gated topbar button is intentionally redundant with the save bar button

The current implementation answers both questions implicitly (yes, there is confirmation for the N≥2 topbar button; the save bar always fires `handleDiscard()`, which includes the confirmation), but the requirement is not explicit. For a family app this is a low-risk ambiguity. Noting it for the issue tracker.
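The guard described above can be sketched as follows. The `ConfirmService` shape, the `onDiscard` callback, and the message string are assumptions for illustration, not the component's actual API:

```typescript
// Hedged sketch of the confirm-gated discard path described in the review.
// ConfirmService and the parameter names are assumptions, not the real API.
type ConfirmService = { confirm(message: string): Promise<boolean> };

export async function handleDiscard(
  confirmService: ConfirmService | null,
  onDiscard: () => void,
  message = "Discard all files?"
): Promise<boolean> {
  // In unit tests the service is null: discard immediately, no dialog.
  if (confirmService === null) {
    onDiscard();
    return true;
  }
  // In production both entry points go through the same confirmation.
  const confirmed = await confirmService.confirm(message);
  if (confirmed) {
    onDiscard();
  }
  return confirmed;
}
```

Under this shape, both entry points stay consistent automatically because they share one function; the ambiguity is purely about whether that sharing was a requirement or an accident.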
What's Well-Specified
- `BATCH_TOO_LARGE` error code is specified, implemented, tested, and translated across all 3 locales
- the two-key plural workaround (`bulk_save_cta_one`/`bulk_save_cta`) is documented in the PR description with the Paraglide 2.5 constraint
- `goto('/documents')` on completion — unambiguous success behaviour

🎨 Leonie Voss — UX Designer & Accessibility Strategist
Verdict: ✅ Approved
Accessibility fundamentals are solid. The component set is well-thought-out for the target audience (transcribers on laptop/tablet). One shared-component regression to confirm, and two follow-up notes.
Strong Accessibility Work
`FileSwitcherStrip.svelte`

- `aria-live="polite"` + `aria-atomic="true"` in an `.sr-only` div announces the active file name to screen readers when it changes — this is excellent. Most developers miss this.
- Arrow-key navigation stays within the strip (instead of roving `tabindex` manipulation, which also works fine here).
- Chips are 44px × 44px (via `h-[44px] w-[44px]`) — meets the WCAG 2.2 minimum, good for the 60+ audience.
- Arrow buttons are also `h-[44px] w-[44px]` — good.

`BulkDropZone.svelte`

- `role="region"` + `aria-label` + `aria-describedby` — landmark with accessible name and description. Correct.
- File input wrapped in a `<label>` — accessible CTA button: `.sr-only` on the input, visible label text. Correct pattern.
- `accept="application/pdf"` — communicates the constraint to assistive technology.

`UploadSaveBar.svelte`

- `<progress>` with `aria-label`, `aria-valuenow`, `aria-valuemin`, `aria-valuemax` — fully accessible progress indicator.
- Save button `min-h-[44px]` — meets the minimum touch target.
- Discard link also `min-h-[44px]` — good.

Concern — `autofocus` Removal Affects Existing Forms

`WhoWhenSection.svelte` had `autofocus` on the date input (when no initial date) and on the sender typeahead (when a date was pre-filled). These were removed in this PR. The removal is correct for the bulk layout — autofocus in a split-panel form with multiple fields is disruptive.

However, `WhoWhenSection` is shared with the existing single-document create and edit forms, where autofocus guided new users through the form sequentially. If removing autofocus from those forms was unintentional, it's a UX regression for the primary digitisation flow. If intentional, no change needed — just confirming the decision was conscious.
Follow-up Notes (not blocking)
1. `fixed` layout on mobile with the virtual keyboard

`BulkDocumentEditLayout.svelte` positions the save bar with `fixed`. When the virtual keyboard opens on a tablet (which the transcriber persona uses), `fixed` elements can behave unexpectedly — the keyboard may not push the layout, potentially obscuring the save bar or the active input. This is a known iOS Safari issue with `fixed` + keyboard. Since transcribers are the primary users, and they're on laptop/tablet rather than phone, this is lower priority, but worth a manual test on iPad Safari.

2. Navigation arrows use text characters (‹/›)
These are typographic arrows rendered as text. On some fonts/platforms they render poorly or inconsistently. A 12px SVG chevron (same as the back arrow in the topbar) would be more consistent. Cosmetic.
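For illustration, such a chevron might look like this (a sketch, not the topbar's actual markup — the path and attributes are assumptions):

```html
<!-- Illustrative 12px right chevron; stroke inherits the current text colour -->
<svg width="12" height="12" viewBox="0 0 24 24" aria-hidden="true" focusable="false">
  <path d="M9 6l6 6-6 6" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" />
</svg>
```

Using `aria-hidden="true"` keeps the decorative glyph out of the accessibility tree, matching the strip's existing labelling approach.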
3. Title input `<label>` wrapping is accessible — confirmed

Both the per-file and single-file title inputs use implicit label association (the input is inside the `<label>`). No `for`/`id` pairing is needed when the input is a direct child of the label. Pattern is correct.
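The pattern in question, schematically (not the component's exact markup; the field name is an assumption):

```html
<!-- Implicit association: the input is a child of the label, so no for/id is needed -->
<label>
  Title
  <input type="text" name="title" />
</label>
```

Assistive technology announces the label text when the input receives focus, which is why no explicit `for`/`id` wiring is required here.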