- Replace hasTags(List<String>) spec with hasTags(List<Set<UUID>>, useOr)
- AND mode: one EXISTS subquery per expanded tag ID set; an empty set
  becomes cb.disjunction() (always false), so an unknown tag matches nothing
- OR mode: union of all expanded sets into a single EXISTS subquery
- DocumentService calls tagService.expandTagNamesToDescendantIdSets() before building spec
- DocumentController exposes ?tagOp=AND|OR query param (default AND)
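The AND/OR semantics above can be sketched in plain Java. This is illustrative only; the real implementation is a JPA Specification, and matches() plus its signature are assumptions, not project code:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.UUID;

public class TagMatchSketch {
    // Each inner Set<UUID> is one requested tag name expanded to its own
    // ID plus all descendant tag IDs.
    static boolean matches(Set<UUID> docTagIds, List<Set<UUID>> expandedSets, boolean useOr) {
        if (useOr) {
            // OR mode: one check against the union of all expanded sets
            Set<UUID> union = new HashSet<>();
            expandedSets.forEach(union::addAll);
            return docTagIds.stream().anyMatch(union::contains);
        }
        // AND mode: every expanded set must be hit by at least one of the
        // document's tags; an empty set can never be hit (mirrors the
        // spec's always-false cb.disjunction() case)
        return expandedSets.stream()
                .allMatch(set -> docTagIds.stream().anyMatch(set::contains));
    }
}
```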
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds parent_id FK (ON DELETE SET NULL), self-reference check constraint,
parent_id index, and nullable color column to the tag table.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The Lesefertig pulse was removed from the UI; drop the backend support
for it too: remove the subquery from findWeeklyStats(), the projection
getter, and the DTO field, and update all affected tests.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The original needsExpert V37 migration was applied to the dev DB before
the feature was removed. Renaming our new indexes migration to V38 avoids
the Flyway checksum conflict. Regenerated api.ts now reflects the
@Schema(requiredMode=REQUIRED) annotations — DTO fields are non-optional.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
All non-null DTO fields are now marked required so the generated api.ts
emits required (non-optional) types for callers. V37 migration adds
created_at/updated_at indexes on document_annotations and transcription_blocks
to avoid full table scans in the weekly stats correlated subqueries.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Verifies 401/403/200 responses for all four endpoints. Matches
the @WebMvcTest + @RequirePermission pattern used across the project.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Introduces TranscriptionQueueProjection and TranscriptionWeeklyStatsProjection
interfaces so column reordering in native SQL can never silently produce wrong
data. Removes the four type-coercion helpers (toUUID, toLocalDate, toInt, toLong)
from TranscriptionQueueService. Covered by TranscriptionQueueServiceTest (6 tests).
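A minimal illustration of why interface projections are safer than Object[] rows; the getter names and types here are assumed, not the real projection:

```java
import java.time.LocalDate;
import java.util.UUID;

// Spring Data binds native-query columns to these getters by column
// alias, not by position, so reordering the SELECT list cannot silently
// shift a value into the wrong field the way Object[] indexing could.
interface QueueRowSketch {
    UUID getDocumentId();
    LocalDate getUploadDate();
    Integer getBlockCount();
}
```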
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Two backend tests passed a 6-element enrichment row but the rebase
added summary_snippet as column 7 — added null at index 6 to both
fixtures.
Two frontend page.server tests mocked only 4 dashboard API calls but
the page now makes 8 (3 Mission Control queues + weekly-stats added
on this branch) — added the 4 missing mock responses.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The ready-to-read and transcription queue queries were dividing the
reviewed-block count by textedBlockCount instead of annotationCount.
A document with 4/15 annotations typed — all 4 reviewed — scored
4/4 = 100 % and incorrectly appeared in the Lesefertig column.
Both queries now compute the ratio as:
reviewed / annotationCount
so a document must have ≥ 90 % of all its drawn regions reviewed
before it graduates to Lesefertig.
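A sketch of the corrected gate in plain Java; the 90 % threshold is from this commit, but the method name and shape are illustrative, not the actual SQL:

```java
public class LesefertigSketch {
    // reviewed = blocks whose transcription has been reviewed,
    // annotationCount = ALL drawn regions, typed or not
    static boolean isLesefertig(long reviewed, long annotationCount) {
        if (annotationCount == 0) return false; // nothing drawn yet
        return (double) reviewed / annotationCount >= 0.9;
    }
}
```

With the bug, the 4/15 document scored 4/4 = 100 %; with the fix it scores 4/15 ≈ 27 % and stays out of the queue.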
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Drops the needsExpert / needs_expert flag end-to-end: DB migration
(V37, never applied), Document entity field, PATCH endpoint, service
method, DTO field, all three queue queries, ExpertBadge component,
i18n key, generated API types, and test fixture.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
V36 (add_index_transcription_blocks_document_id) was applied to the dev
database during a previous local session but never committed to git.
Flyway checksum mismatch prevented the backend from starting.
- V36__add_index_transcription_blocks_document_id.sql: restored from the
index that already exists in the database (idx_transcription_blocks_document_id)
- V36__add_needs_expert_to_documents.sql → V37__add_needs_expert_to_documents.sql
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Add a summary_snippet column to findEnrichmentData using ts_headline on
documents.summary, only when the summary's tsvector matches the query.
Expose it via SearchMatchData.summarySnippet / summaryOffsets and render
a "Zusammenfassung" / "Summary" / "Resumen" labelled row in the document
list — identical treatment to the transcription snippet row.
Fixes the case where a document appeared in search results with no
visible match explanation (e.g. searching "frucht" found a document
whose summary mentioned "Früchte").
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Replace websearch_to_tsquery with a CROSS JOIN LATERAL subquery that
appends :* to each lexeme so prefix matches work (e.g. "furchtb" finds
"furchtbar"). websearch_to_tsquery still handles the safe tokenisation
of user input (stop words, special chars, operators); regexp_replace
then adds :* before to_tsquery re-parses the result.
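The rewrite can be simulated outside SQL. The regex below is an assumption about the shape of the migration's regexp_replace, applied to websearch_to_tsquery's text output:

```java
public class PrefixQuerySketch {
    // websearch_to_tsquery('german', 'furchtb haus')::text looks like
    //   'furchtb' & 'haus'
    // Appending :* to each quoted lexeme yields
    //   'furchtb':* & 'haus':*
    // which to_tsquery then re-parses as a prefix query.
    static String addPrefixMarkers(String tsqueryText) {
        return tsqueryText.replaceAll("('[^']+')", "$1:*");
    }
}
```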
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- SearchMatchData gains a 6th field snippetOffsets: List<MatchOffset> so the frontend
can render highlighted terms inside the transcription snippet without {#html}.
- DocumentRepository.findEnrichmentData now calls ts_headline() with chr(1)/chr(2)
sentinels instead of returning raw block text; parseHighlight() strips the sentinels
and produces clean text + MatchOffset list in one pass.
- DocumentService exposes ParsedHighlight and parseHighlight() as public so they can be
called from cross-package integration tests.
- All related tests updated to the new 6-argument SearchMatchData constructor and
to call parseHighlight() for asserting the snippet clean text and offsets.
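A self-contained sketch of the sentinel-stripping pass; the names mirror the commit, but the exact parseHighlight() internals are assumed:

```java
import java.util.ArrayList;
import java.util.List;

public class HighlightSketch {
    record MatchOffset(int start, int end) {}
    record ParsedHighlight(String cleanText, List<MatchOffset> offsets) {}

    // chr(1) opens a highlight, chr(2) closes it; offsets index into the
    // cleaned text, so the frontend never needs {#html}.
    static ParsedHighlight parse(String raw) {
        StringBuilder clean = new StringBuilder();
        List<MatchOffset> offsets = new ArrayList<>();
        int openAt = -1;
        for (char c : raw.toCharArray()) {
            if (c == '\u0001') {
                openAt = clean.length();
            } else if (c == '\u0002') {
                offsets.add(new MatchOffset(openAt, clean.length()));
                openAt = -1;
            } else {
                clean.append(c);
            }
        }
        return new ParsedHighlight(clean.toString(), offsets);
    }
}
```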
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
DocumentService.searchDocuments now returns DocumentSearchResult with matchData
populated from findEnrichmentData. Title highlights are parsed from chr(1)/chr(2)
delimiters into MatchOffset lists; transcription snippet and sender/receiver/tag
match flags are extracted from the same native SQL row.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- should_find_document_by_sender_name — symmetric with existing receiver test
- fts_combined_with_status_filter_excludes_non_matching_status — verifies
hasIds(rankedIds).and(hasStatus(...)) two-phase search works together
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
ids.indexOf() scans the full list for each document, giving O(n²) total.
Build a Map<UUID, Integer> once in O(n) and use getOrDefault for O(1)
lookups per document. Behavior is identical; existing tests remain green.
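The replacement in miniature (names illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

public class RankIndexSketch {
    // Before: ids.indexOf(doc.getId()) per document, O(n) each, O(n²) total.
    // After: one O(n) pass to build the index map, then O(1) lookups.
    static Map<UUID, Integer> rankIndex(List<UUID> ids) {
        Map<UUID, Integer> index = new HashMap<>(ids.size() * 2);
        for (int i = 0; i < ids.size(); i++) {
            index.put(ids.get(i), i);
        }
        return index;
    }
}
```

A plausible sort key is then index.getOrDefault(doc.getId(), Integer.MAX_VALUE), which pushes any unranked document to the end.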
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
When a user explicitly selects DATE sort with a text query active, the
previous code treated it identically to RELEVANCE, silently discarding
the user's sort choice. Remove DATE from the useRankOrder condition so
that explicit DATE sort always goes through the standard JPA sort path.
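The corrected condition, sketched with assumed names (the real code lives in the search/service layer):

```java
public class SortSketch {
    enum SortKey { RELEVANCE, DATE, TITLE }

    // Rank order applies only when a text query is active AND the user has
    // not explicitly chosen another sort.
    // Previously: hasTextQuery && (sort == RELEVANCE || sort == DATE),
    // which silently discarded an explicit DATE choice.
    static boolean useRankOrder(boolean hasTextQuery, SortKey sort) {
        return hasTextQuery && sort == SortKey.RELEVANCE;
    }
}
```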
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Fires the BEFORE UPDATE trigger for every documents row, which recomputes
the tsvector from all currently-linked metadata, blocks, receivers, and tags.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- V34 migration: adds search_vector tsvector column with GIN index
- BEFORE INSERT/UPDATE trigger on documents rebuilds vector from title (A),
summary + transcription_blocks.text (B), sender/receiver names (C),
tag names + location (D) using german FTS config
- AFTER triggers on transcription_blocks, document_receivers, document_tags
touch the parent document row to re-fire the BEFORE UPDATE trigger
- DocumentRepository.findRankedIdsByFts() native query using websearch_to_tsquery
- DocumentFtsTest: 12 integration tests covering stemming, trigger sync,
ranking, stop words, malformed input, receiver and tag search
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
setCer() was called for recognition training but not for segmentation.
The OCR service now returns cer = 1 - accuracy for segtrain; persist it
so the admin panel can display the Fehlerrate (error rate) for both
training types.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Removed duplicate import of org.mockito.ArgumentMatchers.eq from
DocumentControllerTest (lines 32+35). Added @ApiResponse(responseCode="204")
to patchTrainingLabel so the generated OpenAPI spec matches the actual
NoContent response the controller returns.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
OcrTrainingService.triggerTraining() and triggerSegTraining() held a DB
connection open for the entire ketos training run (potentially minutes),
risking connection pool exhaustion. Replaced class-level @Transactional
with TransactionTemplate for narrow DB writes: guard+create and
result-record each run in their own short transaction; the HTTP call to
the OCR service runs between them with no open connection.
Also replaces blockRepository.findAll().size() with blockRepository.count()
in getTrainingInfo() to avoid loading every block into heap on each poll.
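The transaction shape in miniature. TxRunner is a plain-Java stand-in for TransactionTemplate.execute(), and the run-id and wiring are illustrative, not the real OcrTrainingService:

```java
import java.util.function.Supplier;

public class NarrowTxSketch {
    // Each inTx() call corresponds to one short TransactionTemplate
    // transaction in the real service.
    interface TxRunner { String inTx(Supplier<String> work); }

    static String triggerTraining(TxRunner tx, Supplier<String> ocrHttpCall) {
        // short tx 1: guard against a concurrent run and create the run row
        String runId = tx.inTx(() -> "run-1");
        // long HTTP call to the OCR service; no DB connection is held here
        String result = ocrHttpCall.get();
        // short tx 2: record the result for the finished run
        return tx.inTx(() -> runId + ":" + result);
    }
}
```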
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
V23 introduced a JSONB check constraint (chk_annotation_polygon_quad)
requiring polygon arrays to have exactly 4 points. V30 introduced a
partial unique index preventing two concurrent RUNNING training runs.
These are DB-level invariants that unit tests cannot verify — five
Testcontainers tests now assert they are correctly applied by Flyway
and enforced by PostgreSQL.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Training reloads the Kraken model in-process on the Python service.
The DB-level RUNNING constraint prevents concurrent API calls but
cannot protect against multi-replica deployments. Added explicit
comments in docker-compose.yml and OcrTrainingService to prevent
accidental horizontal scaling. See ADR-001.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>