Extends the 4-corner L-bracket handles with 4 tick-mark edge handles
(short lines along each edge), enabling single-axis resize from any edge.
Updates applyHandleDrag to route each handle to the correct axis.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- 4 corner-only handles (nw/ne/sw/se), no edge midpoints
- Each handle renders as two short perpendicular lines meeting at the corner
(10px arms, navy, square linecap) — no fill, no box
- Thin dashed selection border added to SVG overlay to signal edit mode
- Simplify applyHandleDrag to remove dead n/s/e/w branches
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
A ResizeObserver tracks the SVG's actual pixel dimensions and the viewBox
is kept in sync with them, so the 16px handle squares and 44px hit areas
render at physically correct sizes regardless of the annotation's aspect
ratio.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Commit 5afdc37 changed the empty state from transcription_empty_cta
('Markiere einen Bereich…', 'Mark a region…') to transcription_empty_draw_hint
('Zeichnen Sie Bereiche…', 'Draw regions…') but left the spec asserting the
old text.
Updated the locator to match the current component output.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Removed duplicate import of org.mockito.ArgumentMatchers.eq from
DocumentControllerTest (lines 32+35). Added @ApiResponse(responseCode="204")
to patchTrainingLabel so the generated OpenAPI spec matches the actual
NoContent response the controller returns.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
WCAG 1.4.1 (Use of Color) requires non-color redundant cues for status.
The Unicode ✓/✗ characters had inconsistent screen-reader support.
Replaced with explicit aria-hidden SVG icons (checkmark / x-circle)
alongside the translated status text labels.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
TranscriptionEditView rendered 'Kurrent-Erkennung' ('Kurrent recognition')
and 'Segmentierung' ('segmentation') as hardcoded German strings, breaking
the en/es locales. Added
training_chip_kurrent and training_chip_segmentation keys to all three
message files and wired them up via m.training_chip_kurrent() /
m.training_chip_segmentation().
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
_check_training_token previously skipped auth when TRAINING_TOKEN was
empty, allowing unauthenticated requests to reach /train and /segtrain.
Now returns 503 ("Training not configured on this node") when the token
is absent, so missing configuration fails closed rather than open.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
OcrTrainingService.triggerTraining() and triggerSegTraining() held a DB
connection open for the entire ketos training run (potentially minutes),
risking connection pool exhaustion. Replaced class-level @Transactional
with TransactionTemplate for narrow DB writes: guard+create and
result-record each run in their own short transaction; the HTTP call to
the OCR service runs between them with no open connection.
Also replaced blockRepository.findAll().size() with blockRepository.count()
in getTrainingInfo() to avoid loading every block into the heap on each poll.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Two workers × ~5 GB Surya model load = ~10 GB required, exceeding the
8 GB memory cap and causing OOM on the first /train call. Two OS
processes also cause model-state divergence after training, contradicting
the single-node constraint documented in ADR-001.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
V23 introduced a JSONB check constraint (chk_annotation_polygon_quad)
requiring polygon arrays to have exactly 4 points. V30 introduced a
partial unique index preventing two concurrent RUNNING training runs.
These are DB-level invariants that unit tests cannot verify — five
Testcontainers tests now assert they are correctly applied by Flyway
and enforced by PostgreSQL.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Fresh cloners had no tracked reference for required env vars.
.env is gitignored (contains real credentials). .env.example
documents all variables including the new OCR_TRAINING_TOKEN
for the Python OCR microservice training endpoints.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Screen readers did not announce page-by-page OCR progress updates.
Wrapping the counter text in a span with aria-live="polite" ensures
assistive technology announces each page completion without
interrupting the user.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Training reloads the Kraken model in-process on the Python service.
The DB-level RUNNING constraint prevents concurrent API calls but
cannot protect against multi-replica deployments. Added explicit
comments in docker-compose.yml and OcrTrainingService to prevent
accidental horizontal scaling. See ADR-001.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
A 100-page document at ~10 s/page takes ~17 min on CPU-only hardware,
which could cause the presigned URL to expire mid-OCR job. 1 hour gives
ample headroom for any realistic document size in this archive.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The cascade-delete commit (5a5a8b6) added blockRepository.deleteByAnnotationId()
to AnnotationService.deleteAnnotation(), but the test class was not updated to
mock TranscriptionBlockRepository. Mockito injected null, causing deleteAnnotation_succeeds_whenOwner
to throw NPE. Adds the mock, verifies the cascade call, and adds an inOrder test
asserting the block is deleted before the annotation.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The DELETE endpoint was returning 500 due to a FK constraint violation.
`deleteAnnotation` now calls `blockRepository.deleteByAnnotationId()`
before removing the annotation.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Drawing annotations is now the primary workflow. OCR only runs on
manually drawn regions (guided mode always). Full-page layout detection
and the useExistingAnnotations checkbox are removed entirely.
- OcrTrigger: guided-only, disabled with hint when no annotations exist
- TranscriptionEditView: empty state shows draw-regions instruction,
OCR trigger moved out of collapsible and shown inline after block list
- i18n: add ocr_trigger_no_annotations, ocr_section_heading,
transcription_empty_draw_hint; remove ocr_use_existing_annotations keys
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
ketos 7 defaults to safetensors output, but kraken's load_any() only
handles CoreML (.mlmodel). Adding --weights-format coreml ensures the
hot-swap after training produces a file that load_any() can parse.
Also fixed _find_best_model to look for best_<score>.mlmodel (produced
by --weights-format coreml) in addition to the previous checkpoint_*
pattern.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Kraken 7 removed support for the legacy `path` format (image + .gt.txt
pairs) in VGSLRecognitionDataModule despite the CLI still advertising it.
Switched to the PAGE XML format (-f page), which is the supported standard.
- Java export now writes .xml alongside .png (PAGE XML with TextLine,
Baseline at 75% height, and Unicode transcription)
- XML special characters in transcription text are escaped (& < >)
- Python trainer globs *.xml and passes -f page to ketos train
- Regenerated frontend API types to include cer/loss/accuracy/epochs on
OcrTrainingRun (were missing, causing empty CER column in history)
- Updated and extended TrainingDataExportServiceTest
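On the Python side the same escaping guarantee is available in the standard library; a sketch of what the export must ensure for transcription text (the function name is illustrative):

```python
from xml.sax.saxutils import escape


def escape_transcription(text: str) -> str:
    """Escape the XML special characters (& < >) so a transcription such as
    'Müller & Söhne <1850>' survives embedding in a PAGE XML Unicode element.
    """
    return escape(text)
```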
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
ketos segtrain has no batch-size flag (-B), so with the default 1800px
input height the intermediate CNN feature maps consume upwards of 500 MB
per image, and the kernel OOM killer terminates the process (exit code -9).
On first run (no existing blla.mlmodel), override the VGSL spec to use
800px height instead. Subsequent runs load the saved model with
--resize both, preserving incremental fine-tuning.
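The branching might be sketched as follows (the function name is illustrative, and the VGSL string is a placeholder standing in for the project's actual network definition, only the reduced input height is taken from the message above):

```python
from pathlib import Path


def segtrain_extra_args(model_path: Path, height: int = 800) -> list[str]:
    """Choose ketos segtrain arguments by whether a saved model exists.

    First run: no blla.mlmodel yet, so pass a VGSL spec whose input height
    is reduced from the 1800px default to keep feature maps in memory.
    Later runs: load the saved model and fine-tune incrementally.
    """
    if model_path.exists():
        # Incremental run: resume from the saved segmentation model.
        return ["-i", str(model_path), "--resize", "both"]
    # First run: placeholder VGSL spec with reduced input height.
    return ["-s", f"[1,{height},0,3 ...network layers...]"]
```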
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Force CPU-only training (--device cpu), cap OpenMP/BLAS thread pool at 2
(--threads 2), and reduce epochs from 50 to 10 (-N 10). 50 epochs on a
laptop OOM-killed the container. 10 epochs is sufficient for incremental
fine-tuning runs; more data is added over time and training re-run.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
DataLoader worker subprocesses crash inside Docker due to multiprocessing
fork restrictions. Pass --workers 0 to both ketos train and ketos segtrain
so data loading runs in the main process.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
kraken.ketos has no .train or .segtrain attributes in Kraken 7 — both are
only exposed as CLI commands. Rewrites both training functions to invoke
`ketos train` / `ketos segtrain` via subprocess and parse the best
val_metric from checkpoint filenames.
Also fixes the OcrTrainingCard history so it only shows non-blla runs
(recognition model), matching SegmentationTrainingCard which already
filtered to blla-only.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
After each training run, the Character Error Rate (CER = 1 - accuracy),
loss, accuracy, and epoch count are now stored on the OcrTrainingRun
record and shown in the training history table.
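The stored value follows directly from the reported accuracy; a minimal sketch (the function name is illustrative):

```python
def character_error_rate(accuracy: float) -> float:
    """CER as stored on the training run: CER = 1 - accuracy.

    ketos reports a validation accuracy in [0, 1]; the history table
    shows the complementary error rate.
    """
    if not 0.0 <= accuracy <= 1.0:
        raise ValueError(f"accuracy out of range: {accuracy}")
    return 1.0 - accuracy
```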
Also adds the missing POST /api/ocr/segtrain endpoint and the
triggerSegTraining service method so the segmentation training card
can actually trigger training.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Previously all MANUAL blocks counted as eligible training data, even ones
where text was filled in by guided OCR but never explicitly reviewed. This
caused segmentation and recognition counts to always match.
Now only reviewed=true blocks qualify for recognition training, so the
counts properly reflect: segments = all drawn annotation boxes,
checked text = only boxes where the user has verified the transcription.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The parent was manually remapping availableSegBlocks → availableBlocks
before passing props, which broke after the card was updated to read
availableSegBlocks directly. Pass the full trainingInfo object instead.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Both cards were reading the same availableBlocks field, so the segmentation
box always showed the kurrent recognition count. Use the correct
availableSegBlocks field from the training info response.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The findSegmentationBlocks query was filtering out blocks with non-empty
text. Segmentation training only needs annotation geometry (polygon/bbox),
not transcription text — so any MANUAL block on a KURRENT_SEGMENTATION
document should count, regardless of whether it has text.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Historical letter lines often intersect, so the system must support
overlapping annotation regions. Removed the overlap guard from
createAnnotation(), deleted ErrorCode.ANNOTATION_OVERLAP, and cleaned
up all tests and frontend error mappings that referenced it.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
When a user draws annotation boxes to mark OCR regions, the blocks are
created with source=MANUAL and empty text. upsertGuidedBlock was
protecting all MANUAL blocks unconditionally, so guided OCR silently
produced no output for these drawn-but-empty blocks.
Changed the guard to only protect non-empty MANUAL blocks — empty ones
are treated like OCR blocks and get their text filled in.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Kraken's segmentation bounds check rejects coordinates where any point
satisfies x >= im.width or y >= im.height (the check uses >=, not a strict >). Using
(cw, ch) as the boundary corner was triggering this for every crop.
Changed to (cw-1, ch-1) so all coordinates are strictly inside the image.
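The clamp can be sketched as a small helper (the function name is illustrative):

```python
def clamp_crop_polygon(points: list[tuple[int, int]],
                       cw: int, ch: int) -> list[tuple[int, int]]:
    """Clamp polygon coordinates so every point is strictly inside a
    cw x ch crop. Kraken's bounds check rejects any point with
    x >= width or y >= height, so the largest valid corner is (cw-1, ch-1).
    """
    return [(min(max(x, 0), cw - 1), min(max(y, 0), ch - 1))
            for x, y in points]
```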
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
blla.segment() is a full-page layout detection model that kills the worker
process when called on tiny annotation crops (e.g. 597x89 px). For guided
OCR the annotation region IS already the text line, so segmentation is
unnecessary. Replace the blla call with a single synthetic BaselineLine that
spans the full crop width — rpred then runs recognition on the whole crop.
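Built as plain geometry, the synthetic line might look like this sketch (the 75%-height baseline mirrors the training-export convention but is an assumption here, and wrapping the result in kraken's actual line container is left out):

```python
def synthetic_baseline_line(crop_width: int, crop_height: int) -> dict:
    """Build one full-width text line for a crop, skipping blla.segment().

    The guided-OCR annotation region is already a single text line, so
    instead of full-page layout detection, recognition is handed a
    synthetic line: a baseline spanning the crop width and a boundary
    covering the whole crop. Returned as plain data.
    """
    y = int(crop_height * 0.75)
    return {
        "baseline": [(0, y), (crop_width - 1, y)],
        "boundary": [(0, 0), (crop_width - 1, 0),
                     (crop_width - 1, crop_height - 1),
                     (0, crop_height - 1)],
    }
```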
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
When a document has manually drawn annotation boxes, the user can now
enable "Nur annotierte Bereiche" ("annotated regions only") in the OCR
trigger panel. The engine
skips layout detection entirely and runs recognition only within the
pre-drawn bounding boxes, preserving manual transcription blocks.
- Python: adds OcrRegion model, extend OcrRequest/OcrBlock; guided
branch in /ocr/stream groups by page and crops each region
- Engines: add extract_region_text() to both Kraken and Surya
- Java: adds OcrBlockResult.annotationId, OcrClient.OcrRegion,
TriggerOcrDTO.useExistingAnnotations; OcrAsyncRunner dispatches to
upsertGuidedBlock when annotationId is present; OcrService threads
the flag through to runSingleDocument
- TranscriptionService: adds upsertGuidedBlock (creates, updates OCR,
or preserves MANUAL blocks)
- Frontend: guided OCR toggle in OcrTrigger shown when blocks exist;
skips destructive-replace confirmation in guided mode
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add /segtrain endpoint to OCR service (ZIP upload, ketos.segtrain,
backup rotation, in-process model reload)
- Add segtrainModel() to OcrClient and RestClientOcrClient (10-min timeout,
X-Training-Token header)
- Add SegmentationTrainingExportService: PAGE XML export with polygon
de-normalization and per-page PNG rendering via PDFBox
- Add GET /api/ocr/segmentation-training-data/export endpoint
- Make TranscriptionBlock.text nullable for segmentation-only blocks
(V31 migration)
- Add Paraglide i18n translation keys for all training UI strings (de/en/es)
- Pass source prop from TranscriptionEditView to TranscriptionBlock
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Convert TrainingHistory, OcrTrainingCard, SegmentationTrainingCard, and
TranscriptionBlock "Nur Segmentierung" ("segmentation only") badge to use
Paraglide message keys
- Add availableSegBlocks to TrainingInfoResponse to expose segmentation
block count in the training info endpoint
- Wire SegmentationTrainingCard into admin/system page below OCR training card
- Update api.ts with availableSegBlocks field
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- POST /train in ocr-service with ZIP Slip validation, TemporaryDirectory,
ketos transfer learning, timestamped backups (keep last 3), in-process reload
- X-Training-Token auth (no-op in dev when TRAINING_TOKEN env is empty)
- trainModel() in OcrClient interface + RestClientOcrClient (10-min timeout,
multipart upload, forwards X-Training-Token when configured)
- TRAINING_TOKEN env var wired in docker-compose; --workers 2 in Dockerfile
so /health stays responsive during synchronous training
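The ZIP Slip validation might be sketched like this (function names are illustrative, not the service's actual code):

```python
import zipfile
from pathlib import Path


def is_safe_member(name: str, dest: Path) -> bool:
    """True if the archive member resolves to a path under dest.

    A member like '../../etc/cron.d/evil' (ZIP Slip) would otherwise
    escape the temporary extraction directory.
    """
    dest = dest.resolve()
    return (dest / name).resolve().is_relative_to(dest)


def safe_extract(zip_path: Path, dest: Path) -> None:
    """Extract a training ZIP, rejecting every ZIP Slip entry up front."""
    with zipfile.ZipFile(zip_path) as zf:
        for member in zf.namelist():
            if not is_safe_member(member, dest):
                raise ValueError(f"blocked ZIP Slip entry: {member!r}")
        zf.extractall(dest)
```

Validating all names before extracting anything keeps a partially hostile archive from leaving files behind.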
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- TrainingDataExportService: PDFBox rendering at 300 DPI, crop by
annotation coordinates, ZIP with <uuid>.png + <uuid>.gt.txt pairs
- Skips documents with missing S3 files (logs WARN, continues)
- GET /api/ocr/training-data/export (ADMIN); 204 when no enrolled blocks
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>