feat(dashboard): redesign Dokumente dashboard as a document hub (Variant A) #271

Closed
opened 2026-04-18 19:26:50 +02:00 by marcel · 19 comments
Owner

Problem

The current homepage (/) is a wall of lists — nothing invites collaboration, nothing celebrates group progress, and there is no clear primary action. It feels institutional, not familial.

Solution

Redesign the Dokumente dashboard as an action-led document hub (Variant A from the design exploration):

  • Greeting + contextual subtitle — personalised, activity-aware ("Klaus hat dich erwähnt…")
  • Hero "Weiter, wo du aufgehört hast" — large card with letter thumbnail (only place large enough to be legible), archival caption, pull-quote excerpt, per-letter progress bar, "Weitertranskribieren" CTA
  • Mission Control 3-up — Segmentieren / Transkribieren / Prüfen columns, task rows with metadata + avatar of who started each item; no thumbnails at small sizes (kurrent is unreadable noise)
  • Family Pulse sidebar card — "Diese Woche · gemeinsam" with three big numbers (transkribiert / geprüft / hochgeladen); no cumulative 1,500 total (demotivating at this early stage)
  • Kommentare & Aktivität feed — replaces "Zuletzt hinzugefügt"; "für dich" badge on @mentions and replies
  • Upload dropzone — secondary sidebar card, unchanged

Design spec

docs/specs/dokumente-dashboard-spec.html — committed in the same issue.

marcel added the feature and ui labels 2026-04-18 19:26:56 +02:00
Author
Owner

Design spec committed: 40db469 → docs/specs/dokumente-dashboard-spec.html

Sections covered:

  • §1 Full page overview (1280px, ~55% scale)
  • §2 Header + search bar
  • §3 Greeting + hero resume card
  • §4 Mission control 3-up (Bereit, Zu prüfen, Zu bearbeiten)
  • §5 Family Pulse sidebar (week/month/all-time toggle)
  • §6 Activity feed + dropzone
  • §7 Mobile (320px)
  • §8 Implementation notes — 30+ i18n keys, 3 new backend endpoints, 10-step sequence

Ready to implement.

Author
Owner

🏛️ Markus Keller — Senior Application Architect

Observations

  • Critical data gap: status change history. GET /api/dashboard/pulse?period=week needs counts of documents transcribed / reviewed / uploaded this week. The current Document model stores only the current status — not when it changed. createdAt covers uploads, but transitions to TRANSCRIBED / REVIEWED have no timestamp. This blocks the Pulse endpoint entirely.

  • GET /api/dashboard/resume is derivable without a new table. Query the most recently updated TranscriptionBlock where updatedBy = currentUser and the parent document has status UPLOADED. TranscriptionBlock.updatedBy and updatedAt already exist — no migration needed for this endpoint.
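
The derivation above can be sketched as a plain SQL query. Table and column names here are guesses inferred from the entity names, not confirmed schema:

```sql
-- Sketch: most recently touched in-progress document for the current user.
SELECT d.*
FROM transcription_block tb
JOIN documents d ON d.id = tb.document_id
WHERE tb.updated_by = :currentUserId
  AND d.status = 'UPLOADED'
ORDER BY tb.updated_at DESC
LIMIT 1;
```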

  • PDF thumbnail approach: generate-once is architecturally correct. Serving thumbnails on-the-fly per dashboard page load means one CPU-bound PDF render per concurrent user viewing the dashboard. That scales linearly and badly. The right design: async thumbnail generation on upload, stored in MinIO as thumbnails/{documentId}_p1.jpg, served via presigned URL. The @Async pattern already used by FileService and the OCR pipeline applies directly.

  • AppUser has no color field. The spec references person.color in 4 separate impl-ref tables (hero collaborator stack, mission control starter avatars, pulse contributor avatars, activity feed avatars). This field is load-bearing — avatars will have no background colour without it. It must be added before any frontend component work begins.

  • Existing endpoint overlap. GET /api/documents/recent-activity partially overlaps the new GET /api/dashboard/activity. These must not coexist — delete the old endpoint in the same PR to prevent confusion and double-maintenance.

  • DashboardController is a cross-cutting concern. It aggregates across documents, persons, and comments — it should live in its own package, not inside document/. Inject DocumentService, PersonService, and a thin ActivityService that wraps comment + status-change queries.

Recommendations

  • Add Flyway V46 migration adding status_changed_at TIMESTAMPTZ NOT NULL DEFAULT now() to documents. Backfill with updated_at. Going forward, DocumentService sets this column whenever document.status changes. This gives the Pulse endpoint "changed this week" data without event sourcing.
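
The recommended migration, sketched as a Flyway SQL file (the filename and column conventions follow the text above but should be verified against the actual schema):

```sql
-- V46__add_document_status_changed_at.sql (filename is illustrative)
ALTER TABLE documents
    ADD COLUMN status_changed_at TIMESTAMPTZ NOT NULL DEFAULT now();

-- Backfill existing rows with the best available approximation,
-- per the recommendation above.
UPDATE documents
SET status_changed_at = updated_at;
```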

  • Add thumbnailPath String to Document entity. Populate asynchronously via a new ThumbnailService that calls the OCR service (POST /ocr/thumbnail — see Tobias's review) after upload completes. The presigned-URL logic in FileService.getPresignedUrl() already handles authenticated serving.

  • Add color String to AppUser with @Builder.Default assigned deterministically: PALETTE[Math.abs(userId.hashCode()) % PALETTE.length] from a fixed 8-colour array matching the spec's colours. No user input required; consistent across sessions.

  • Delete /api/documents/incomplete and /api/documents/recent-activity in the same PR as the new dashboard endpoints. These are superseded and will cause confusion if left in.

  • Do not implement the Pulse endpoint before V46 is in. Using updated_at as a fallback produces wrong numbers because updated_at changes on any metadata edit, not just status transitions.

Author
Owner

👨‍💻 Felix Brandt — Senior Fullstack Developer

Observations

  • Component decomposition in the spec is correct. Six clearly-named visual regions → six component files: DashboardResumeStrip.svelte, MissionControlStrip.svelte, DashboardFamilyPulse.svelte, DashboardActivityFeed.svelte, plus changes to +page.svelte and +layout.svelte. Each maps to exactly one nameable area.

  • TranscriptionQueueService already exists and delivers the three queue lists (Segmentierung / Transkription / Lesefertig). The spec's suggestion to use raw GET /api/documents?status=PLACEHOLDER&limit=5 queries bypasses this. Route through DashboardController → TranscriptionQueueService instead — the logic is already tested.

  • AppUser has no color field. Every avatar in this spec uses it. This field must exist before MissionControlStrip, DashboardFamilyPulse, and DashboardActivityFeed can render correctly. It's a prerequisite, not a nice-to-have.

  • Pull-quote for the hero card is derivable. TranscriptionBlock has sortOrder for ordering and text for content. First 200 chars of the first block (by sortOrder) gives the excerpt. No new field needed.
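
The derivation is simple enough to pin down in a sketch. Method and parameter names here are illustrative, not from the codebase:

```java
// Hypothetical helper: hero pull-quote from the first TranscriptionBlock.
// Caller passes block texts already ordered by sortOrder.
static String pullQuote(java.util.List<String> blockTextsBySortOrder) {
    if (blockTextsBySortOrder.isEmpty()) return "";
    String first = blockTextsBySortOrder.get(0);
    // First 200 chars, with an ellipsis only when truncated.
    return first.length() <= 200 ? first : first.substring(0, 200) + "…";
}
```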

  • Progress percentage is derivable. TranscriptionQueueProjection already provides annotationCount and reviewedBlockCount via native SQL. pct = reviewedBlockCount / annotationCount. Use this projection in the resume endpoint response rather than adding a denormalized field.
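
One pitfall worth making explicit: both counts are integers, so a literal `reviewedBlockCount / annotationCount` truncates to 0 for any partial progress. A sketch of the computation (names are illustrative):

```java
// Hypothetical helper: progress percentage from the projection's counts.
static int progressPct(long reviewedBlockCount, long annotationCount) {
    if (annotationCount == 0) return 0; // fresh document, no blocks yet
    // Promote to double before dividing — plain integer division would
    // yield 0 for every partially transcribed document.
    return (int) Math.round(100.0 * reviewedBlockCount / annotationCount);
}
```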

  • $derived applies throughout. Every computed value in the new components (progressPct, isForYou, collaboratorNames, greeting time-of-day) should be $derived, not $state + $effect.

Recommendations

  • Write DashboardControllerTest first — @WebMvcTest(DashboardController.class) with three test methods: resume_returns_null_when_no_in_progress, pulse_returns_week_stats, activity_returns_activity_list. Run them red before creating the controller class.

  • Add color: String to AppUser with @Builder.Default from a deterministic palette hash. Include @Schema(requiredMode = REQUIRED) so it propagates to the TypeScript types after npm run generate:api. Without this, the TypeScript AppUser type will have color?: string and every avatar will need a null-check fallback.

  • Component test for DashboardResumeStrip: two cases — empty state renders m.dashboard_empty_title() heading; loaded state renders document.title in an <h2> and a role="progressbar" with correct aria-valuenow.

  • Do not use {#each queue as doc} without a key. All three mission control queues must be keyed: {#each queue as doc (doc.id)}.

  • The greeting subtitle contains a document title (user-controlled content). Use {subtitle} interpolation, never {@html subtitle}. This is already safe in Svelte by default — just flag it explicitly in the component so it's not "helpfully" changed to @html later.

Author
Owner

🔒 Nora "NullX" Steiner — Application Security Engineer

Observations

  • All three new dashboard endpoints expose aggregated data to authenticated users. This is fine — it's the same audience as the rest of the API. The risk is forgetting to add the permission annotation on a new controller. Annotate the class, not individual methods.

  • Thumbnail serving path needs care. If thumbnailPath is returned as a raw MinIO path in the API response and the frontend constructs a direct MinIO URL, that URL is unauthenticated. Use FileService.getPresignedUrl(thumbnailPath) to generate a short-TTL signed URL server-side, exactly as document downloads currently work. Do not add a new unauthenticated /api/documents/{id}/thumbnail endpoint.

  • youMentioned flag in the activity feed. This is computed by checking DocumentComment.mentions for the current user's ID. The same logic already exists in NotificationController (SSE stream). Do not re-implement it independently — call the same service method or extract a shared mentionsCurrentUser(comment, userId) helper. Two independent implementations of "does this mention me?" will diverge.
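
The shared helper could be as small as the sketch below. The mention storage is assumed here to be a collection of user UUIDs; adapt to the real DocumentComment.mentions type:

```java
// Hypothetical shared helper — single source of truth for
// "does this comment mention me?", usable by both the SSE
// notification stream and the activity feed.
static boolean mentionsCurrentUser(java.util.Set<java.util.UUID> mentions,
                                   java.util.UUID currentUserId) {
    return mentions != null && mentions.contains(currentUserId);
}
```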

  • Activity feed reveals who is working on what. For a family archive with READ_ALL gating, this is acceptable — all authenticated users can see all documents already. No additional scoping is needed. Document this reasoning in a comment in DashboardController so a future reviewer doesn't "fix" it by adding per-user filtering.

  • Greeting subtitle renders user-controlled content (document title: "Klaus hat dich erwähnt in 'Brief an Frieda'"). Svelte's {variable} syntax escapes HTML automatically — no XSS risk. Would only become a risk if someone changes it to {@html ...}, which they shouldn't. Confirmed safe.

  • No new CSRF surface. All three endpoints are GET — they read, not write. No CSRF consideration needed.

Recommendations

  • Annotate DashboardController at class level:

    @RestController
    @RequestMapping("/api/dashboard")
    @RequirePermission(Permission.READ_ALL)
    public class DashboardController { ... }
    

    Class-level annotation covers all three methods. No risk of accidentally leaving one unprotected.

  • Return presigned thumbnail URL in GET /api/dashboard/resume response, not the raw thumbnailPath. The DTO should include thumbnailUrl: String (presigned, 15-minute TTL), not the internal S3 key. This is the same pattern as FileService.getPresignedUrl() used elsewhere.

  • Add security tests to DashboardControllerTest:

    @Test void resume_returns_401_when_unauthenticated()
    @Test void pulse_returns_403_when_user_has_no_permissions()
    

    One per endpoint variant. Standard table stakes for any new controller.

  • Do not expose TranscriptionBlock.createdBy or updatedBy UUIDs in the activity feed DTO. The spec needs Person { name, initials, color } — strip internal IDs at the DTO projection layer before they reach the API response.

Author
Owner

🧪 Sara Holt — Senior QA Engineer

Observations

  • The spec's 10-step implementation sequence maps directly to test-driven delivery. Each step produces something independently testable: backend endpoints first, then TypeScript types, then components. This is the right order.

  • Family Pulse stats require status_changed_at to be testable. Without Markus's V46 migration, any Pulse integration test that asserts "transcribed this week = 3" would be testing against updated_at, which changes on metadata edits. The test would be unreliable. This migration is a prerequisite for writing meaningful Pulse tests.

  • GET /api/dashboard/resume has three distinct cases that each need a test before implementation:

    1. User has in-progress blocks → returns most recently updated document
    2. User has no in-progress blocks → returns null (empty state)
    3. User has in-progress blocks on multiple documents → returns the most recently touched one
  • Activity feed deduplication behaviour is unspecified. If a user transcribes the same document 3 times in one hour, does the feed show 3 entries or 1? The spec says "recent archive-wide activity, newest first" with limit=7 — but doesn't address collapse/dedup. This is a genuine open decision (flagged below). It must be decided before writing the feed tests.

  • Mobile layout needs visual regression coverage. §7 of the spec includes a 320px mobile layout. Without tests at this breakpoint, layout regressions will slip through undetected on subsequent PRs.

  • color field on AppUser must be present before component tests for any avatar-rendering component can assert correct visual output.

Recommendations

  • Test plan for DashboardController unit layer (@ExtendWith(MockitoExtension.class)):

    • DashboardResumeServiceTest: 3 cases above
    • DashboardPulseServiceTest: current-week counts correct, last-week documents excluded
    • DashboardActivityServiceTest: returns newest first, youMentioned set correctly for current user
  • Integration test for Pulse stats using Testcontainers + real PostgreSQL 16: create two documents with status_changed_at set to Monday of current week and one set to last Sunday; assert transcribed = 2, not 3.

  • Vitest component tests:

    • DashboardResumeStrip: renders empty state when resume === null; renders role="progressbar" with correct aria-valuenow when loaded
    • DashboardActivityFeed: renders "für dich" badge when activity.youMentioned === true; does not render badge when false
  • Playwright E2E: one smoke test at 1280px (dashboard loads, hero card visible, 3 mission columns visible), one at 320px (single-column stacked layout, no horizontal overflow). Total: 2 tests. Permutation coverage stays at the integration layer.

  • Add @Test void pulse_counts_only_current_week_status_changes() as the first test written — before any Pulse SQL is written. It documents the exact boundary condition (status_changed_at >= start_of_current_iso_week) that the query must satisfy.
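
The boundary condition can be pinned down with `java.time`. A sketch only — the time zone in which "the week" starts is itself an open decision:

```java
// Hypothetical helper: Monday 00:00 of the ISO week containing `today`.
// The Pulse query would compare status_changed_at >= this boundary.
static java.time.LocalDateTime startOfIsoWeek(java.time.LocalDate today) {
    return today
        .with(java.time.temporal.TemporalAdjusters
            .previousOrSame(java.time.DayOfWeek.MONDAY))
        .atStartOfDay();
}
```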

Author
Owner

🎨 Leonie Voss — UX Design Lead & Accessibility Strategist

Observations

  • The spec itself is well-structured. Two-layer format (scaled mockup + impl-ref table with real px values), explicit decision log, scale disclaimer. This is the correct template for this project. No changes needed to the spec format.

  • AppUser.color is missing and visually load-bearing. Avatars appear in 4 distinct sections of the dashboard: hero collaborator stack, mission control row starters, pulse contributor stack, activity feed actors. Without this field, all avatars render as colorless circles. This is the single highest-priority data gap.

  • Mission control task row touch targets fall short. The impl-ref specifies py-2.5 (10px vertical padding). At 16px font, that gives ≈36px total row height — short of the 44px target size WCAG recommends (Target Size, Level AAA; the WCAG 2.2 AA minimum is only 24px, but 44px is the right bar here). This is especially important given the 60+ audience.

  • Progress bar lacks ARIA semantics. The hero card progress bar is a <div> with a fill width. Without role="progressbar" and aria-valuenow / aria-valuemax, screen readers announce nothing about transcription progress.

  • Landmark structure is not specified. The spec mockup shows distinct page sections but doesn't call out HTML landmark roles. The mission control strip needs <section aria-label="..."> with a visible heading to be navigable by screen reader users.

  • Empty mission control avatar has a title attribute noted in impl-ref ("noch niemand angefangen"). This is correct — it provides the accessible name for keyboard and screen reader users. Confirm this is implemented as title, not just a visual tooltip via CSS.

  • Font sizes throughout comply with the senior audience constraint. Greeting headline at 32px, pull-quote at 17px, feed text at 15px, body at 16px. No violations.

  • The spec documents a 320px mobile layout in §7. The grid collapses to single column. Confirmed via grid-cols-[1fr_320px] with responsive override. Sidebar stacks below main content on mobile — this is the correct order for reading flow.

Recommendations

  • Fix mission control row touch targets. Change py-2.5 to py-3.5 (14px vertical padding). Total: 16px font + 28px padding = 44px — exactly the 44px AAA target size. Add min-h-[44px] flex items-center as belt-and-suspenders.

  • Add ARIA to progress bar:

    <div
      role="progressbar"
      aria-valuenow={pct}
      aria-valuemin={0}
      aria-valuemax={100}
      aria-label={m.transcription_progress()}
      class="w-full h-1.5 bg-[#eeede8] rounded-full overflow-hidden mb-5"
    >
    
  • Add landmark structure to +page.svelte:

    <main>
      <!-- greeting, search, hero -->
      <section aria-label={m.dashboard_mission_caption()}>
        <MissionControlStrip />
      </section>
    </main>
    
  • color field: recommend a fixed 8-colour curated palette matching the spec exactly: ['#7a4f9a', '#5a8a6a', '#3060b0', '#a0522d', '#c0446e', '#c17a00', '#0e7490', '#1d4ed8']. Assign via hashCode(userId) % 8. These are the exact colours shown in the spec's avatar examples — consistent with the existing visual language.
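
Combining this palette with Markus's hashing formula gives a complete sketch. Only the method name is invented; the colours are the spec's:

```java
// Deterministic avatar colour: same user always gets the same colour,
// with no user input and no storage beyond the persisted field.
static final String[] PALETTE = {
    "#7a4f9a", "#5a8a6a", "#3060b0", "#a0522d",
    "#c0446e", "#c17a00", "#0e7490", "#1d4ed8"
};

static String colorFor(String userId) {
    // Java's % can return negatives, so take abs of the *remainder*
    // (the remainder is > Integer.MIN_VALUE, so abs cannot overflow).
    return PALETTE[Math.abs(userId.hashCode() % PALETTE.length)];
}
```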

  • Run axe-core on the full dashboard page in the Playwright E2E suite. The multiple interactive sections and new ARIA roles make this a high-value automated check target.

Author
Owner

## ⚙️ Tobias Wendt — DevOps & Platform Engineer

### Observations

- **Thumbnail generation is the only infrastructure question here.** Everything else (3 new API endpoints, 6 new Svelte components) runs on the existing stack with zero config changes.

- **Generate-once is the right approach.** On-the-fly thumbnail rendering means one PDF-to-JPEG conversion per dashboard viewer. The OCR service already loads Surya/Kraken models at startup — adding a lightweight thumbnail render to that container is a 3-line change, but making it synchronous per request would be wrong: a spike in concurrent users would cause memory contention with the loaded OCR models.

- **The OCR service already has everything needed.** It imports `pdf2image` and `Pillow` (both used in the existing OCR pipeline). A `POST /ocr/thumbnail` endpoint that extracts page 1 as a 360×492px JPEG is a small addition — it can reuse `_download_and_convert_pdf()`, which already handles MinIO fetching (checked: the function exists around line 80 of `main.py`).

- **Storage overhead is negligible.** A 180×246px JPEG thumbnail is ~15-30KB, so 10,000 documents come to 150-300MB. At Hetzner Object Storage pricing (~€0.02/GB), that's under €1/year. No concern.

- **No new volumes, no new services, no new compose entries needed.** The thumbnail path is just another object in the existing `archive-documents` MinIO bucket. No structural changes to `docker-compose.yml`.

- **One new async flow to document:** `FileService.uploadDocument()` → fires `@Async ThumbnailService.generateThumbnail(documentId)` → calls OCR service `POST /ocr/thumbnail` → stores the result in MinIO → sets `document.thumbnailPath`. This is the same pattern as the existing OCR trigger. Document it in a comment in `ThumbnailService` so the next person understands the async boundary.

### Recommendations

- **Add `POST /ocr/thumbnail` to the OCR service (Python/FastAPI).** Request: `{ pdfUrl: string }`. Response: JPEG binary (`Content-Type: image/jpeg`). Implementation: call `_download_and_convert_pdf(pdfUrl)`, take `images[0]`, resize to 360×492px (2× retina density for the 180×246px display size), return as JPEG with quality 85.

- **Fire thumbnail generation asynchronously on document upload**, not on first dashboard view. Use `@Async` on `ThumbnailService.generateAndStore(UUID documentId)`. If it fails, log and leave `thumbnailPath = null` — the hero card renders the SVG placeholder fallback per spec.
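That log-and-leave-null contract can be sketched without the Spring machinery. A minimal sketch, assuming illustrative names: `ThumbnailFlow` and `onUpload` stand in for the real `@Async ThumbnailService.generateAndStore(UUID)`, and the supplier stands in for the OCR call plus MinIO store.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

// Illustrative stand-in for the async thumbnail flow: render off the request
// thread, swallow failures, leave the path null so the hero card falls back
// to the SVG placeholder.
class ThumbnailFlow {
    volatile String thumbnailPath = null;

    CompletableFuture<Void> onUpload(Supplier<String> renderAndStore) {
        return CompletableFuture.runAsync(() -> {
            try {
                // e.g. "thumbnails/{documentId}_p1.jpg" on success
                thumbnailPath = renderAndStore.get();
            } catch (Exception e) {
                // log-and-leave-null: an upload must never fail because of a thumbnail
                System.err.println("thumbnail generation failed: " + e.getMessage());
            }
        });
    }
}
```

The key property: the upload path never sees the exception, and a null `thumbnailPath` is a valid, renderable state.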

- **Store thumbnails in the same MinIO bucket** (`archive-documents`) under a `thumbnails/` prefix. Key: `thumbnails/{documentId}_p1.jpg`. Keeps backup and restore scope unified — one bucket, one backup policy.

- **Cache-Control on presigned thumbnail URLs.** Set `Cache-Control: max-age=86400` on presigned URL responses from `FileService`. Thumbnails don't change unless the document is replaced (rare). This prevents redundant signed URL regeneration on every dashboard reload.

- **Add a Grafana panel** tracking MinIO bucket size by prefix (`thumbnails/` vs `documents/`). Cheap to add now, avoids surprises at scale. Alert threshold: >2GB total (a generous ceiling for a family archive).


## 🗳️ Decision Queue — Action Required

_1 decision needs your input before implementation starts._

### Activity Feed

- **Activity feed deduplication** — If a user acts on the same document multiple times in quick succession (e.g. saves a transcription block 3 times in one hour), should the `GET /api/dashboard/activity` feed show 3 separate entries or collapse them into 1?

  - **Option A — No dedup (show all):** Simpler backend query. The feed can look noisy if someone is actively working on a document.
  - **Option B — Collapse per user/document/kind per hour:** Cleaner feed. Slightly more complex grouping query (`GROUP BY actor, document, kind, time_bucket`). The "vor 12 Min." timestamp would reflect the most recent action in the bucket.

  _(Raised by: Sara)_
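Option B's semantics can be prototyped in memory before committing to SQL. A minimal sketch, assuming only the bucketing rule described above; the `Event` record and its fields are illustrative, not the real entity:

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

record Event(String actor, String doc, String kind, Instant at) {}

class FeedCollapse {
    // Option B: one feed row per (actor, document, kind, hour bucket),
    // keeping the most recent event so the timestamp reflects the latest action.
    static List<Event> collapse(List<Event> events) {
        return events.stream()
            .collect(Collectors.toMap(
                e -> List.of(e.actor(), e.doc(), e.kind(),
                             e.at().truncatedTo(ChronoUnit.HOURS)),
                e -> e,
                (a, b) -> a.at().isAfter(b.at()) ? a : b))
            .values().stream()
            .sorted(Comparator.comparing(Event::at).reversed())
            .toList();
    }
}
```

Three saves by the same actor on the same document within one hour collapse to a single feed row carrying the newest timestamp; a different actor in the same hour stays a separate row.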


## 🏛️ Markus Keller — Architecture follow-up: audit log prerequisite

Following the discussion about Family Pulse data sources, we concluded that `Document.updatedAt` is too coarse to drive reliable stats — it fires on every field change, with no actor context and no semantic distinction between "user saved transcription text" and "system updated status."

**Decision:** build a dedicated audit log first, then the dashboard on top of it.

Created **#274 — feat(audit): domain-level audit log for archive activity** as a prerequisite for this issue.

Key decisions made in this discussion:

- **Pulse should reflect all meaningful activity** — transcription block saves, annotation creation, and document metadata changes (sender, tags, date) — not just document status transitions.
- **The `status_changed_at` migration (previously proposed as V46) is dropped.** The audit log supersedes it cleanly.
- **`Document.updatedAt` is accepted as slightly imprecise for metadata events** — the audit log will be the authoritative source going forward.
- **Activity feed and Pulse both draw from `audit_log`**, not from ad-hoc queries across multiple tables.

Implementation sequence update: **#274 must be merged before the dashboard backend endpoints in this issue can be implemented.**


## 🏛️ Markus Keller — Architecture discussion summary

Follow-up discussion resolving all remaining open items for the dashboard backend design.

### Resolved: Family Pulse stats

All stats and their data sources are now fully defined:

| Element | Label (DE) | Query |
|---|---|---|
| Headline | "N Seiten bearbeitet" | `COUNT(DISTINCT (document_id, payload->>'pageNumber'))` on `ANNOTATION_CREATED` + `TEXT_SAVED` events in `audit_log` |
| Stat 1 | "X Textstellen markiert" | `COUNT(*)` on `ANNOTATION_CREATED` — each annotation is created once |
| Stat 2 | "Y Textstellen transkribiert" | `COUNT(DISTINCT entity_id)` on `TEXT_SAVED` — distinct blocks with text, not raw saves |
| Stat 3 | "Z Dokumente hochgeladen" | `COUNT(DISTINCT document_id)` on `FILE_UPLOADED` |

`pageNumber` is written into the audit event `payload` at write time (see #274), so the headline query never needs a join.
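As a sanity check on the headline semantics — distinct (document, page) pairs across the two event kinds, mirroring `COUNT(DISTINCT (document_id, payload->>'pageNumber'))` — a minimal in-memory sketch with an illustrative `AuditEvent` record:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

record AuditEvent(String kind, String documentId, int pageNumber) {}

class PulseHeadline {
    // "N Seiten bearbeitet": a page counts once no matter how many
    // annotations or text saves touched it, and upload events don't count.
    static long editedPages(List<AuditEvent> events) {
        Set<List<Object>> pages = new HashSet<>();
        for (AuditEvent e : events) {
            if (e.kind().equals("ANNOTATION_CREATED") || e.kind().equals("TEXT_SAVED")) {
                pages.add(List.of(e.documentId(), e.pageNumber()));
            }
        }
        return pages.size();
    }
}
```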

### Resolved: Activity feed deduplication

Collapse per actor/document/kind per **hour**. The query uses `DISTINCT ON (actor_id, document_id, kind, date_trunc('hour', happened_at))`, ordered by most recent within each bucket. The raw events remain in `audit_log` for stats — only the feed view is collapsed.

### Resolved: DashboardController package

Lives in its own `dashboard/` package — `org.raddatz.familienarchiv.dashboard`. It aggregates across documents, persons, and the audit log; placing it inside `document/` would make that package a cross-domain god module.

### Updated i18n keys

`pulse_uploaded` changes from "Seiten hochgeladen" to "Dokumente hochgeladen". `pulse_headline` changes from "Ihr habt zusammen {pages} Seiten bearbeitet" to "Ihr habt {pages} Seiten bearbeitet".

### Implementation sequence (updated)

**#274 must be merged first.** The dashboard backend endpoints depend entirely on `audit_log` being populated; no dashboard endpoint can be implemented correctly without it.


## 🏛️ Markus Keller — Audit Log Impact on Dashboard Architecture

PR #275 is the right foundation. Three of the four dashboard data problems are now solvable. Here's the precise accounting.

### What #275 unblocks

**Family Pulse — all three stats are now queryable from `audit_log`:**

- Stat 1 "Textstellen markiert" → `COUNT(*) WHERE kind = 'ANNOTATION_CREATED' AND happened_at >= week_start`
- Stat 3 "Dokumente hochgeladen" → `COUNT(DISTINCT document_id) WHERE kind = 'FILE_UPLOADED' AND happened_at >= week_start`
- Pulse headline "N Seiten bearbeitet" → `COUNT(DISTINCT (document_id, payload->>'pageNumber')) WHERE kind IN ('ANNOTATION_CREATED','TEXT_SAVED')`

**Resume card** → `SELECT document_id FROM audit_log WHERE kind = 'TEXT_SAVED' AND actor_id = :userId ORDER BY happened_at DESC LIMIT 1`

**Activity feed core events** → `TEXT_SAVED`, `FILE_UPLOADED`, `ANNOTATION_CREATED`, `BLOCK_REVIEWED` are all populated.

### Data gap: `TEXT_SAVED` payload is missing `blockId`

My own comment #3354 specified: *"COUNT(DISTINCT entity_id) on TEXT_SAVED — distinct blocks with text, not raw saves."* The current payload is `{pageNumber: N}` — there is no block/entity ID. One page can have multiple blocks; two saves on different blocks of the same page are indistinguishable.

**Fix before data accumulates:** add `blockId` to the `TEXT_SAVED` payload in PR #275, or as a follow-up before merging — a retroactive fix is impossible. The change is minimal:

```java
auditService.logAfterCommit(AuditKind.TEXT_SAVED, userId, documentId,
    Map.of("pageNumber", pageNumber, "blockId", saved.getId().toString()));
```

Pulse stat 2 then becomes `COUNT(DISTINCT payload->>'blockId') WHERE kind = 'TEXT_SAVED'`.

### Data gap: comment/mention activity is not in the audit log

The spec's activity feed shows two item types that #275 does not cover: "Klaus hat dich erwähnt" and "Lotte hat geantwortet auf deinem Kommentar". These require `DocumentComment` events. My comment #3354 said "activity feed draws from `audit_log`" — that statement assumed comment events would be added too. They weren't.

**Decision needed:** either (a) add `COMMENT_ADDED` and `MENTION_CREATED` events to the audit log (cleaner — one data source), or (b) let `DashboardController` merge audit events with a separate comment query (two sources, a more complex JOIN at query time). Option A is architecturally cleaner. Flagging as an open decision below.

### Query layer design

`AuditLogRepository` currently extends `JpaRepository` with no custom queries. The dashboard queries require:

- PostgreSQL's `DISTINCT ON` for activity dedup (JPQL cannot express this)
- the JSONB operator `payload->>'pageNumber'` (JPQL cannot express this either)
- `date_trunc('hour', happened_at)` for time bucketing

All three require `@Query(nativeQuery = true)`. I recommend a dedicated `AuditLogQueryService` (not methods dumped into `AuditLogRepository`) to keep the main repository clean and the dashboard queries in one place — consistent with how `DashboardController` lives in its own `dashboard/` package.

### Minor: `@Async log()` is unreachable in practice

`AuditService.log()` is annotated `@Async("auditExecutor")`, but every call site uses `logAfterCommit()` instead. The `auditExecutor` thread pool is configured and running but executes zero tasks. Either document the intended use case or remove `log()` before the next PR adds a third call site and picks the wrong method.

### Open Decisions

- **Comment events in audit log** — Add `COMMENT_ADDED`/`MENTION_CREATED` events to `audit_log` (one unified data source for `DashboardController`) vs. merging a comment-table query at the controller layer (two sources). Option A requires expanding PR #275, or a follow-up before dashboard backend work starts. _(Raised by: Markus; impacts Felix's DashboardController design)_

## 👨‍💻 Felix Brandt — Audit Log Impact on Dashboard Implementation

### Observations

**The event guards are correct and audit-accurate.**

- `TEXT_SAVED` only fires on `!text.equals(previousText)` — rapid re-saves of identical text produce no events. Pulse stat 2 won't overcount.
- `BLOCK_REVIEWED` only fires on `false → true` transitions — toggling a review off emits nothing. The activity feed won't show "un-review" noise.
- `ANNOTATION_CREATED` fires for manual annotations only (`createOcrAnnotation` is excluded, confirmed by `createOcrAnnotation_doesNotLogAuditEvent`). Pulse stat 1 counts user intent, not OCR automation.

**`requireUserId(Authentication)` is duplicated verbatim.**
Both `DocumentController` and `TranscriptionBlockController` contain this identical private method:

```java
private UUID requireUserId(Authentication authentication) {
    if (authentication == null || !authentication.isAuthenticated()) { ... }
    AppUser user = userService.findByEmail(authentication.getName());
    if (user == null) { throw DomainException.unauthorized("User not found"); }
    return user.getId();
}
```

The new `DashboardController` will need actor resolution too. Before it lands, extract this to a `SecurityUtils.requireUserId(Authentication auth, UserService userService)` static helper, or move it to a shared base controller class. Three copies is the breaking point.

**`AuditLogRepository` needs native SQL from the start — plan for it.**
The dedup query for the activity feed is:

```sql
SELECT DISTINCT ON (actor_id, document_id, kind, date_trunc('hour', happened_at))
    actor_id, document_id, kind, happened_at, payload
FROM audit_log
WHERE happened_at >= :weekStart
ORDER BY actor_id, document_id, kind, date_trunc('hour', happened_at), happened_at DESC
```

JPQL cannot express `DISTINCT ON` or `date_trunc`. Write this as `@Query(nativeQuery = true)` — and write the failing test for it first, against a real Testcontainers Postgres, so the PostgreSQL-specific syntax is actually validated.

**`TEXT_SAVED` payload is missing `blockId` — Pulse stat 2 is underspecified.**
Markus flagged this above. From an implementation standpoint: the block ID is available at the call site in `TranscriptionService.updateBlock()` — it's `saved.getId()`. Add it to the payload now:

```java
auditService.logAfterCommit(AuditKind.TEXT_SAVED, userId, documentId,
    Map.of("pageNumber", pageNumber, "blockId", saved.getId().toString()));
```

The dashboard can then `COUNT(DISTINCT payload->>'blockId')` for stat 2. Existing data from before this fix will be inaccurate — accept that and document it in the migration.
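A tiny sketch of why `pageNumber` alone undercounts: two saves on different blocks of the same page collapse to one without `blockId`. The payload maps below are illustrative stand-ins for the JSONB payload:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

class Stat2 {
    // Count distinct values of one payload key across TEXT_SAVED events —
    // the in-memory analogue of COUNT(DISTINCT payload->>key).
    static long distinctBy(List<Map<String, String>> payloads, String key) {
        Set<String> seen = new HashSet<>();
        for (Map<String, String> p : payloads) {
            String v = p.get(key);
            if (v != null) seen.add(v);
        }
        return seen.size();
    }
}
```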

**`@Async log()` is unreachable.**
All six instrumented call sites use `logAfterCommit()`. The `@Async("auditExecutor")` annotation on `log()` does nothing in practice. Before `DashboardController` is built, clarify the contract: if `logAfterCommit()` is the standard, remove `log()` or add a prominent comment explaining when each is appropriate. An undocumented overload in a cross-cutting service will be misused.

### Recommendations

- Extract `requireUserId` to a shared utility before the next controller adds it.
- Add `blockId` to the `TEXT_SAVED` payload in the current PR or an immediate follow-up — do not wait until the dashboard queries are being written.
- Write `AuditLogQueryService` with `@Query(nativeQuery = true)` methods as the first TDD step of the dashboard backend work: red tests against Testcontainers Postgres → green queries.
- An i18n mapping from `AuditKind` to German action string is needed in the frontend: `TEXT_SAVED` → "hat transkribiert", `FILE_UPLOADED` → "hat hochgeladen", `ANNOTATION_CREATED` → "hat markiert", `BLOCK_REVIEWED` → "hat geprüft", `METADATA_UPDATED` → "hat bearbeitet". These keys should be added to `messages/de.json` before any component renders a feed item.
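A sketch of that mapping, written in Java only to pin down the expected pairs — in the app these are keys in `messages/de.json` consumed by the Svelte frontend, not a backend enum:

```java
import java.util.Map;

// Kinds and German action strings as listed in this thread.
enum AuditKind { TEXT_SAVED, FILE_UPLOADED, ANNOTATION_CREATED, BLOCK_REVIEWED, METADATA_UPDATED }

class ActionLabels {
    static final Map<AuditKind, String> DE = Map.of(
        AuditKind.TEXT_SAVED, "hat transkribiert",
        AuditKind.FILE_UPLOADED, "hat hochgeladen",
        AuditKind.ANNOTATION_CREATED, "hat markiert",
        AuditKind.BLOCK_REVIEWED, "hat geprüft",
        AuditKind.METADATA_UPDATED, "hat bearbeitet");
}
```

Keeping the mapping exhaustive over `AuditKind` means a new event kind without a label is caught at review time rather than rendering as a blank feed item.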

## 🔒 Nora "NullX" Steiner — Audit Log Security Impact on Dashboard

### Observations

**Append-only enforcement is solid.**
`REVOKE UPDATE, DELETE ON audit_log FROM app_user` in V46 means the application DB role cannot tamper with the audit trail after the fact. This is the right control — database-layer enforcement that no application bug can bypass.

**GDPR `ON DELETE SET NULL` is correct, but creates a live edge case for the dashboard.**
When a user exercises the right to erasure, their `actor_id` becomes null in existing `audit_log` rows. The `DashboardController` will query actor IDs to join against `app_users` for display names and initials. A null `actor_id` must not cause a 500 error.

**Concrete risk:** `GET /api/dashboard/activity` must handle `actor_id IS NULL` gracefully. If the query does an INNER JOIN to `app_users`, rows from deleted users simply vanish from the feed — silent data loss. If it does a LEFT JOIN, the activity row appears with a null actor. The frontend component must handle `actor: null` without crashing.

Recommended response shape:

```java
record ActivityActor(String initials, String color) {}
// actor is null when the user has been deleted (GDPR erasure)
record ActivityFeedItem(AuditKind kind, @Nullable ActivityActor actor,
                        UUID documentId, String documentTitle, OffsetDateTime happenedAt) {}
```

Dashboard must not expose actor_id UUIDs in the API response.
The spec defines activity feed actors as { name, initials, color }. The DTO projection at the DashboardController layer must strip actor_id before the response leaves the server. An inadvertent actorId field in the response DTO would expose internal UUIDs that map to users — not a critical vulnerability given READ_ALL scoping, but unnecessary data exposure.

**`requireUserId` does a DB lookup on every write request.**
`userService.findByEmail(authentication.getName())` is called inside `requireUserId()` on every `PUT /api/documents/{id}`, `POST /api/documents/quick-upload`, and `PUT /api/documents/{id}/transcription-blocks/{blockId}/review`. This is a new DB round-trip that didn't exist before #275. Not a vulnerability, but it widens the availability attack surface: a slow or locked `app_users` table now blocks write operations. If Spring Security can be configured to store the user UUID in the principal (e.g., via a custom `UserDetails` implementation), this lookup could be eliminated. Worth evaluating before the dashboard adds more write endpoints that need actor IDs.

**The `logAfterCommit()` after-commit timing is safe from a security standpoint.**
The audit write happens synchronously after the main transaction commits. If the audit write itself fails (DB timeout, pool exhaustion), the try-catch in `writeLog()` logs and swallows the exception. The main operation succeeds, the audit row is silently lost. This is the right trade-off for a family archive — failing the user's transcription save because the audit log is unavailable would be wrong. But it means audit completeness is best-effort, not guaranteed. Document this explicitly in `AuditService` so a future reviewer doesn't "fix" the swallowed exception.
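The best-effort contract can be stated in a few lines of plain Java — a simplified stand-in (the real code runs inside a Spring `TransactionSynchronization.afterCommit()` callback, which this sketch omits):

```java
import java.util.logging.Logger;

public class BestEffortAudit {
    private static final Logger LOG = Logger.getLogger(BestEffortAudit.class.getName());

    // Audit log is best-effort — failure must not block the domain operation.
    // Returns true when the audit row was written, false when it was dropped.
    static boolean writeLog(Runnable auditWrite) {
        try {
            auditWrite.run();
            return true;
        } catch (RuntimeException e) {
            // Never rethrow: the domain transaction has already committed.
            LOG.warning("audit write failed, dropping event: " + e.getMessage());
            return false;
        }
    }
}
```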

### Recommendations

- Add null-actor handling to the activity feed query and DTO before any dashboard component renders actor data.
- Add a `@Nullable` annotation to the `actor` field in the activity feed DTO — it makes the contract explicit for TypeScript codegen.
- Add a comment in `AuditService.writeLog()` explaining the swallow: *"Audit log is best-effort — failure must not block the domain operation."*
- Add a test, `activity_feed_excludes_or_handles_deleted_user_gracefully()`, before the dashboard activity endpoint ships.

## 🧪 Sara Holt — Audit Log Test Coverage and Dashboard Test Plan

### Observations

**The `logAfterCommit` timing test is the standout in PR #275.**
`logAfterCommit_registersCallback_andSavesOnlyAfterCommit_whenTransactionIsActive` manually captures the `TransactionSynchronization` callback, verifies the repo is *not* called before commit, then fires `afterCommit()` and verifies it *is* called. This is the right level of granularity — it proves the timing contract, not just the happy path.

**Integration tests for Pulse stats are now unblocked.**
Previously (comment #3340) I flagged that `updated_at` made Pulse integration tests unreliable — any metadata edit would skew counts. With `audit_log.happened_at`, we can insert precise test fixtures. The Testcontainers integration test I outlined is now implementable:

```java
@Test
void pulse_counts_only_current_week_text_saved_events() {
    UUID actor = insertUser();
    UUID doc = insertDocument();
    // 3 events this week
    insertAuditEvent(TEXT_SAVED, actor, doc, Map.of("pageNumber", 1, "blockId", "b1"), now().minusDays(1));
    insertAuditEvent(TEXT_SAVED, actor, doc, Map.of("pageNumber", 2, "blockId", "b2"), now().minusDays(2));
    insertAuditEvent(TEXT_SAVED, actor, doc, Map.of("pageNumber", 1, "blockId", "b1"), now().minusDays(3));
    // 1 event last week
    insertAuditEvent(TEXT_SAVED, actor, doc, Map.of("pageNumber", 3, "blockId", "b3"), now().minusWeeks(2));

    PulseStats stats = dashboardService.getPulseStats(Period.WEEK);

    assertThat(stats.transcribedBlocks()).isEqualTo(2); // distinct blockIds: b1, b2
}
```

This test should be written *before* the Pulse query is implemented.
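The fixture's boundary depends on how the week start is defined. One plausible sketch — the ISO-week, Monday-00:00 convention is an assumption; whatever the team picks should be pinned down in one place:

```java
import java.time.DayOfWeek;
import java.time.OffsetDateTime;
import java.time.temporal.TemporalAdjusters;

public class WeekStart {
    // Start of the current ISO week: Monday 00:00 in the caller's offset.
    // Assumption — the project may instead use a rolling 7-day window.
    static OffsetDateTime weekStart(OffsetDateTime now) {
        return now.toLocalDate()
                .with(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY))
                .atStartOfDay(now.getOffset())
                .toOffsetDateTime();
    }
}
```

With a calendar week, `now().minusDays(3)` in the fixture can fall into last week when run on a Monday or Tuesday — another reason to write the boundary test first and make the definition explicit.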

**Missing test: null `actor_id` in activity feed.**
When a user is deleted (GDPR erasure), their rows have `actor_id = NULL`. Neither `AuditServiceTest` nor the new `DocumentServiceTest` cases cover what happens when the dashboard queries encounter null-actor rows. This is an edge case that will silently fail in production on the first user deletion.

Add before dashboard ships:

```java
@Test
void activity_feed_handles_deleted_user_gracefully() {
    // insert audit event with actor_id = null (simulates GDPR erasure)
    insertAuditEvent(TEXT_SAVED, null, docId, Map.of("pageNumber", 1, "blockId", "b1"), now());

    List<ActivityFeedItem> feed = dashboardService.getActivityFeed();

    assertThat(feed).hasSize(1);
    assertThat(feed.get(0).actor()).isNull(); // or "Unbekannter Benutzer" — whatever is decided
}
```

**Deduplication boundary test is critical and currently unspecified.**
The resolved dedup strategy is: collapse per actor/document/kind per hour. The boundary condition needs an explicit test:

```java
@Test
void activity_feed_collapses_same_actor_document_kind_within_one_hour() {
    // Three saves on same doc by same actor within one hour
    insertAuditEvent(TEXT_SAVED, actor, doc, payload, now().minusMinutes(50));
    insertAuditEvent(TEXT_SAVED, actor, doc, payload, now().minusMinutes(30));
    insertAuditEvent(TEXT_SAVED, actor, doc, payload, now().minusMinutes(10));

    List<ActivityFeedItem> feed = dashboardService.getActivityFeed();

    assertThat(feed).hasSize(1);
    assertThat(feed.get(0).happenedAt()).isCloseTo(now().minusMinutes(10), within(1, MINUTES));
}
```

The timestamp of the collapsed row must be the *latest* in the bucket (the spec shows "vor 12 Min." — the most recent action). This needs an explicit assertion on the returned `happenedAt`.
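The collapse-to-latest semantics can be sketched in plain Java (the `Event` record is a hypothetical stand-in; the real implementation is the `DISTINCT ON` query — this only illustrates the bucket key and the max-timestamp rule). One caveat the test above should account for: `date_trunc` produces *calendar-hour* buckets, so events 50 and 10 minutes ago can straddle two buckets depending on wall-clock time; pinning fixture timestamps inside one hour avoids a flaky test.

```java
import java.time.OffsetDateTime;
import java.time.temporal.ChronoUnit;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

public class DedupSketch {
    // Hypothetical stand-in for one audit_log row.
    record Event(UUID actorId, UUID documentId, String kind, OffsetDateTime happenedAt) {}

    // Collapse per (actor, document, kind, calendar hour), keeping the latest
    // event in each bucket — mirroring
    // DISTINCT ON (actor_id, document_id, kind, date_trunc('hour', happened_at)).
    static List<Event> collapse(List<Event> events) {
        Map<List<Object>, Event> latest = new LinkedHashMap<>();
        for (Event e : events) {
            // Arrays.asList tolerates a null actorId (GDPR-erased user).
            List<Object> bucket = Arrays.asList(e.actorId(), e.documentId(), e.kind(),
                    e.happenedAt().truncatedTo(ChronoUnit.HOURS));
            latest.merge(bucket, e,
                    (a, b) -> a.happenedAt().isAfter(b.happenedAt()) ? a : b);
        }
        return new ArrayList<>(latest.values());
    }
}
```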

### Recommendations

- Write the Pulse integration test above as the *first* test in `DashboardServiceIntegrationTest`, before any query code exists. It documents the exact `happened_at` boundary condition.
- Write the null-actor test before the activity feed endpoint ships — it will fail on the first user deletion otherwise.
- Write the dedup boundary test with an explicit assertion that `happenedAt` is the `MAX()` within the bucket.
- The `createOcrAnnotation_doesNotLogAuditEvent` test should have a comment explaining *why* OCR annotations are excluded from audit — otherwise a future developer will "fix" this deliberate omission.

## 🎨 Leonie Voss — Audit Log Impact on Dashboard UX

### Observations

**Comment and mention activity is not covered by PR #275 — the "für dich" badge is blocked.**

The spec's activity feed mockup (§6) shows five item types:

1. "Anna hat transkribiert Postkarte aus Wien" → covered by `TEXT_SAVED` ✓
2. "Klaus hat dich erwähnt in Brief an Frieda" **+ für-dich badge** → NOT in audit log ✗
3. "Oskar hat zugeordnet Konvolut Rose, Heft III" → `METADATA_UPDATED` ✓ (with an action-label question, see below)
4. "Lotte hat geantwortet auf deinem Kommentar" **+ für-dich badge** → NOT in audit log ✗
5. "Theo hat 4 Scans hochgeladen Mappe B" → `FILE_UPLOADED` ✓

Three of five item types render correctly from the audit log. The "für dich" badge — the most emotionally engaging element in the entire dashboard — depends on items 2 and 4, which require DocumentComment data. The activity feed cannot be fully implemented from the audit log alone.

Until comments are added to the audit log (or the feed queries both sources), the activity feed renders without any personalised badge items. This is still worth shipping — it's not empty — but the "für dich" experience that makes the dashboard feel like a family space is absent.

**`METADATA_UPDATED` maps to an ambiguous action label.**

The spec shows "Oskar hat zugeordnet" (assigned/segmented). `METADATA_UPDATED` fires on any metadata change — sender update, tag change, date change, receiver assignment. All of these look identical in the audit log. The feed would show "hat bearbeitet" for all of them, which is vague. Consider adding a `subkind` or more specific event kinds (`DOCUMENT_ASSIGNED`, `TAG_UPDATED`) if the feed needs to distinguish them. For now, "hat bearbeitet" is acceptable — but call this out in the i18n key so it's not a surprise.

**`BLOCK_REVIEWED` is not in the spec's activity feed mockup.**

The spec shows no "hat geprüft" activity feed entry, yet `BLOCK_REVIEWED` events will be in the audit log. Two options: (a) exclude `BLOCK_REVIEWED` from the activity feed query entirely (simpler, matches the spec), or (b) show it with a "hat geprüft" label. `BLOCK_REVIEWED` is also excluded from the Pulse headline "Seiten bearbeitet" — only `ANNOTATION_CREATED` and `TEXT_SAVED` events contribute to page counts. Confirm this intent before writing the dashboard query.

**The i18n action-label mapping needs to be defined before frontend work starts.**

The `DashboardActivityFeed.svelte` component will receive `kind: AuditKind` and must render a German sentence. These keys don't exist in Paraglide yet. Define the mapping before writing any component:

| `AuditKind` | German action | Paraglide key |
|---|---|---|
| `TEXT_SAVED` | hat transkribiert | `audit_action_text_saved` |
| `FILE_UPLOADED` | hat hochgeladen | `audit_action_file_uploaded` |
| `ANNOTATION_CREATED` | hat markiert | `audit_action_annotation_created` |
| `BLOCK_REVIEWED` | hat geprüft | `audit_action_block_reviewed` |
| `METADATA_UPDATED` | hat bearbeitet | `audit_action_metadata_updated` |
| `STATUS_CHANGED` | — (not shown in feed) | — |
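As a starting point, the corresponding `messages/de.json` entries could look like the fragment below — the flat key shape is an assumption; match whatever structure Paraglide already uses in the project:

```json
{
  "audit_action_text_saved": "hat transkribiert",
  "audit_action_file_uploaded": "hat hochgeladen",
  "audit_action_annotation_created": "hat markiert",
  "audit_action_block_reviewed": "hat geprüft",
  "audit_action_metadata_updated": "hat bearbeitet"
}
```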

**Pulse contributor avatars need actor names, not just IDs.**

The Pulse sidebar shows "6 Mitwirkende" with an avatar stack. The query will return distinct `actor_id` values from the week's audit events. The backend must JOIN to `app_users` to resolve initials and color. `AppUser.color` is still missing (noted in prior review, comment #3341) — this blocks avatar rendering in both Mission Control *and* Pulse.

### Recommendations

- The activity feed can ship without comment items — announce this scope reduction clearly so the "für dich" expectation is set correctly.
- Define the `audit_action_*` i18n keys in `de.json`, `en.json`, and `es.json` before writing `DashboardActivityFeed.svelte`.
- Explicitly decide: does `BLOCK_REVIEWED` appear in the activity feed? Update the spec annotation in §6 with the decision.
- The Pulse "Diese Woche · gemeinsam" label should accurately reflect what's being counted — if only `ANNOTATION_CREATED` + `TEXT_SAVED` contribute to "Seiten bearbeitet", the label is truthful; if `BLOCK_REVIEWED` is excluded from the count, make sure the headline still feels accurate to users who did review work that week.

## ⚙️ Tobias Wendt — Audit Log Infrastructure Impact on Dashboard

### Observations

**Cleanest possible infrastructure impact: zero.**
No new services, no new volumes, no compose changes. The audit log is a table in the existing PostgreSQL instance. Backup, restore, and monitoring all inherit automatically.

**The four indexes match the expected query patterns.**

- `idx_audit_log_happened_at DESC` — period scoping (Pulse `WHERE happened_at >= week_start`)
- `idx_audit_log_actor_id` — resume card (`WHERE actor_id = :userId`)
- `idx_audit_log_kind` — event-type filtering
- `idx_audit_log_document_id` — document-scoped activity

For a family archive generating <100 events/day, these are more than sufficient. No composite indexes are needed at current scale.

**The dedup query's `date_trunc('hour', happened_at)` is not indexed — fine now, worth noting for later.**

The resolved activity dedup is `DISTINCT ON (actor_id, document_id, kind, date_trunc('hour', happened_at))`. No index covers the `date_trunc` expression, so the grouping itself cannot be index-assisted — but the period filter still uses the index on `happened_at DESC` to bound the scan, and at 1,000 rows/month the dedup is cheap on the small result set. If the archive grows to 10,000+ events, an expression index on `(actor_id, document_id, kind, date_trunc('hour', happened_at))` would help. File this as a future optimization, not a blocker.

**`REVOKE UPDATE, DELETE` in V46 — confirm the migration role is separate from `app_user`.**

The migration runs as the role configured via `spring.datasource.username` in `application.yaml`. If that is a dedicated owner role, `REVOKE ... FROM app_user` targets the correct, separate application role. But in the current Docker Compose, `POSTGRES_USER` is used for both Flyway migrations and the running application — if both are the same role, `REVOKE UPDATE, DELETE ON audit_log FROM app_user` revokes privileges from the table's own owner, a control the owner can undo at any time.

Check `backend/src/main/resources/application.yaml`:

- If `spring.datasource.username = app_user` AND Flyway also runs as `app_user`, the migration user equals the revoked role. The `REVOKE` will execute, but `app_user` then owns the table and can re-grant its own privileges at any time — the append-only control becomes advisory rather than enforced.
- If Flyway runs as a superuser/owner and `app_user` is a separate application role, V46 is correct.

Verify this before merging. The CI integration tests would catch a broken migration, but only if Testcontainers uses the same role configuration as production.
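A quick way to verify after the migration runs — a plain catalog query, assuming the default `public` schema:

```sql
-- Who can still write to audit_log? Expect app_user to retain INSERT only.
SELECT grantee, privilege_type
FROM information_schema.role_table_grants
WHERE table_name = 'audit_log';

-- Who owns the table? If this is app_user, the REVOKE is bypassable.
SELECT tableowner FROM pg_tables WHERE tablename = 'audit_log';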

**The `auditExecutor` thread pool is configured but executes zero tasks.**

All six `logAfterCommit()` call sites run `writeLog()` synchronously in the `afterCommit()` callback — outside the main transaction, but in the same request thread. The `auditExecutor` bean (1 core, 2 max, queue 50, `CallerRunsPolicy`) allocates a thread pool that handles zero tasks and costs ~1MB of overhead. Either document the intended use case for `@Async log()` or remove both the `log()` method and the `auditExecutor` bean before more code accumulates around this confusion.

**Audit log size projection.**
At steady-state family-archive usage: ~20 `TEXT_SAVED` + 5 `ANNOTATION_CREATED` + 3 `FILE_UPLOADED` per active day. Each row is ~200 bytes (3 UUIDs + timestamp + varchar + JSONB). 30 events/day × 365 days = ~10,950 rows/year = ~2.2MB/year; including JSONB overhead and indexes, ~10MB/year. Negligible. No partitioning or archival policy is needed for the foreseeable future.

### Recommendations

- Verify the Flyway migration user vs. `app_user` role separation before merging V46. If they're the same role, the `REVOKE` needs to run as a superuser in a separate init script.
- Remove the `auditExecutor` thread pool and the `@Async log()` method, or add a code comment documenting the contract — when should callers use `log()` vs. `logAfterCommit()`? Right now there is none.
- No other infrastructure changes are needed — the audit log ships cleanly on the existing stack.

## 🗳️ Decision Queue — Audit Log / Dashboard Impact Review

_3 decisions need your input before dashboard backend implementation starts._

### Architecture

- **Comment events in audit log** — The activity feed's "für dich" @mention and comment-reply items (2 of 5 feed types in the spec) are not covered by PR #275. The dashboard controller needs a data source for them.
  - **Option A — Add `COMMENT_ADDED`/`MENTION_CREATED` to `audit_log`:** One unified data source for all feed items. `DashboardController` runs a single query. Requires expanding PR #275 or a follow-up before dashboard backend work starts.
  - **Option B — Query `document_comments` separately:** `DashboardController` merges two result sets (audit events + comment rows). More complex join. The `youMentioned` flag logic in `NotificationController` would need to be shared.

  _(Raised by: Markus, Leonie)_
- **Null actor in activity feed** — GDPR `ON DELETE SET NULL` means deleted users' `actor_id` becomes null. When the activity feed renders, null actors must be handled at the query level.
  - **Option A — INNER JOIN `app_users`:** Rows from deleted users are silently excluded from the feed. Feed stays clean, data is lost.
  - **Option B — LEFT JOIN `app_users`, actor nullable in DTO:** Feed shows the activity ("hat transkribiert Postkarte aus Wien") without an avatar/name. Frontend must handle `actor: null` gracefully.

  _(Raised by: Nora)_

### UX

- **`BLOCK_REVIEWED` in activity feed** — The spec's §6 mockup shows no "hat geprüft" feed entry, but `BLOCK_REVIEWED` events exist in the audit log. Does the activity feed query include or exclude this event kind? (It is also excluded from the Pulse headline page count per comment #3354.)
  - **Option A — Exclude:** Activity feed only shows `TEXT_SAVED`, `FILE_UPLOADED`, and `ANNOTATION_CREATED`. Simpler query; matches the spec mockup as drawn.
  - **Option B — Include:** Adds "hat geprüft" entries. Reviewers see their work acknowledged. Requires a Paraglide key and an update to the spec annotation in §6.

  _(Raised by: Leonie)_

_Also flagged for verification (not a decision, but it needs checking before PR #275 merges): Tobias identified that the `REVOKE UPDATE, DELETE ... FROM app_user` in V46 may target the wrong role if Flyway and the application share the same DB user. Verify `spring.datasource.username` vs. the `app_user` role in `docker-compose.yml` before merging._

## 🗳️ Decision Queue — Resolved

All three open decisions from the last review cycle are settled. Summary below.

### ✅ Comment events in audit log → Option A

Add `COMMENT_ADDED` and `MENTION_CREATED` to `audit_log` before dashboard backend work starts. One unified data source for all feed items. Details and implementation notes are posted directly on PR #275.

### ✅ Null actor in activity feed → Option B (LEFT JOIN)

Use a LEFT JOIN to `app_users`. Rows from deleted users remain in the feed — the activity happened, and the record should reflect it. The actor is nullable in the DTO; the frontend renders a grey anonymous avatar when `actor === null`. Nora's recommended DTO shape stands:

```java
record ActivityActor(String initials, String color) {}
record ActivityFeedItem(AuditKind kind, @Nullable ActivityActor actor,
                        UUID documentId, String documentTitle, OffsetDateTime happenedAt) {}
```
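On the frontend side, the null-actor fallback can be sketched as follows. This is an illustrative TypeScript sketch, not the actual component code; the interface names simply mirror the backend DTO shape, and the placeholder initials/color are assumptions.

```typescript
// Illustrative shapes mirroring the backend DTOs (names assumed, not actual frontend types).
interface ActivityActor {
  initials: string;
  color: string;
}

interface ActivityFeedItem {
  kind: string;
  actor: ActivityActor | null; // null when the acting user was deleted (GDPR)
  documentTitle: string;
}

// Resolve what the avatar should display: the actor's initials and palette
// color, or a grey anonymous placeholder when the actor is null.
function avatarFor(item: ActivityFeedItem): ActivityActor {
  return item.actor ?? { initials: "?", color: "#9e9e9e" };
}
```

The `??` fallback keeps the feed row renderable without any conditional branching in the template.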

### ✅ `BLOCK_REVIEWED` in activity feed → Option A (exclude)

The activity feed only shows `TEXT_SAVED`, `FILE_UPLOADED`, `ANNOTATION_CREATED`, `COMMENT_ADDED`, `MENTION_CREATED`. Review work is high-frequency and fine-grained; including it would make the feed noisy. This is consistent with its exclusion from the Pulse headline count. No new Paraglide key needed.
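The production filter belongs in the native SQL query, but the allowlist can be made explicit in a few lines. A minimal TypeScript sketch, with the event-kind strings taken from the decision above and the function name purely illustrative:

```typescript
// Event kinds the activity feed displays. BLOCK_REVIEWED is deliberately
// excluded per the decision above (too high-frequency, too fine-grained).
const FEED_KINDS = new Set([
  "TEXT_SAVED",
  "FILE_UPLOADED",
  "ANNOTATION_CREATED",
  "COMMENT_ADDED",
  "MENTION_CREATED",
]);

// Filter a batch of raw audit events down to the kinds the feed shows.
function feedEvents<T extends { kind: string }>(events: T[]): T[] {
  return events.filter((e) => FEED_KINDS.has(e.kind));
}
```

Keeping the allowlist as a single constant means a future "include reviews" reversal is a one-line change.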


---

### 🐛 Migration bug in V46 — fix before merging PR #275

Tobias flagged that `REVOKE UPDATE, DELETE ON audit_log FROM app_user` may target the wrong role if Flyway and the application share the same DB user. Verified: the actual DB role is `archive_user` (set via `POSTGRES_USER` in `.env`). The migration hardcodes `app_user`, which does not exist, so the `REVOKE` will fail at migration time.

**Fix:** replace the hardcoded role name with `CURRENT_USER`:

```sql
-- Before
REVOKE UPDATE, DELETE ON audit_log FROM app_user;

-- After
REVOKE UPDATE, DELETE ON audit_log FROM CURRENT_USER;
```

`CURRENT_USER` resolves to whichever role runs the migration (the same role the application uses), making the append-only enforcement role-agnostic and correct regardless of environment.

## Implementation complete ✅

Branch: `feat/issue-271-dashboard-redesign`

### What was implemented

**Backend (8 commits):**

- `V47__add_user_color.sql` — adds a `color` column to the `users` table; also fixes V46's broken `REFERENCES app_users` → `REFERENCES users` and `REVOKE FROM app_user` → `FROM CURRENT_USER`
- `AppUser.color` — deterministic palette color derived from the user ID via `@PrePersist`/`@PostLoad`
- `TranscriptionService` — `TEXT_SAVED` audit payload now includes `blockId`
- `SecurityUtils.requireUserId()` — extracted shared helper; both `DocumentController` and `TranscriptionBlockController` now delegate to it
- `dashboard/` package — `DashboardController` (3 endpoints), `DashboardService`, `AuditLogQueryService`, `AuditLogQueryRepository` (native PostgreSQL queries with `DISTINCT ON` and JSONB operators), all DTOs (`DashboardResumeDTO`, `DashboardPulseDTO`, `ActivityFeedItemDTO`, `ActivityActorDTO`)
- `GET /api/documents/incomplete` and `GET /api/documents/recent-activity` removed
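The issue does not spell out how `AppUser.color` derives a palette color from the user ID, only that the derivation is deterministic. A hedged sketch of the idea (here in TypeScript for brevity; the real code is Java, and the palette and hash below are hypothetical):

```typescript
// Hypothetical palette; the actual palette values are not specified in the issue.
const PALETTE = ["#e07a5f", "#3d405b", "#81b29a", "#f2cc8f", "#6d597a"];

// Derive a stable palette index from a UUID string: the same user ID always
// yields the same color, so nothing extra needs to be stored or migrated.
function colorForUserId(userId: string): string {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // unsigned 32-bit rolling hash
  }
  return PALETTE[hash % PALETTE.length];
}
```

Deriving rather than randomly assigning keeps avatars consistent across devices and sessions without coordination.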

**Frontend (2 commits):**

- Upload button added to the global header (authenticated users only), routes to `/documents/new`
- `+page.server.ts` — calls `/api/dashboard/resume`, `/api/dashboard/pulse`, `/api/dashboard/activity`; removes the deprecated endpoints
- `+page.svelte` — 2-column layout (`1fr 320px`), personalised greeting (morning/day/evening), `DashboardResumeStrip` + `MissionControlStrip` in the main column; `DashboardFamilyPulse` + `DashboardActivityFeed` + `DropZone` in a sticky sidebar
- `DashboardResumeStrip` — full rewrite: SVG parchment thumbnail, pull-quote, ARIA progress bar, collaborator avatar stack, empty state
- `DashboardFamilyPulse` — new component: weekly page count headline, contributor avatar stack, 3-stat grid
- `DashboardActivityFeed` — new component: activity feed with "für dich" badge for @mentions
- i18n keys added to de/en/es

**Tests:** 1159 backend tests green · 932 frontend tests green
