Add test for 1×1 image (sub-tile-size) resilience and narrow preprocess_page
fallback from except Exception to (cv2.error, ValueError, MemoryError) so
programming errors propagate instead of being silently swallowed.
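A stdlib-only sketch of the narrowed fallback (cv2.error is omitted here so the example runs without OpenCV; `preprocess_page` and its step pipeline are simplified stand-ins):

```python
def preprocess_page(image, steps):
    # Simplified stand-in for the real preprocessing pipeline; the
    # service additionally catches cv2.error in this tuple.
    original = image
    try:
        for step in steps:
            image = step(image)
        return image
    except (ValueError, MemoryError):
        # Expected preprocessing failures: fall back to the raw page.
        return original
    # Any other exception (e.g. an AttributeError from a typo) now
    # propagates instead of being silently swallowed.
```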
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- _model_is_loadable: narrow bare except to (RuntimeError, OSError, ValueError)
with DEBUG-level fallback for unexpected exceptions — prevents silent masking
of missing kraken install or AttributeError on vgsl
- _run_segtrain: replace bare except:pass with log.warning so height-check
fallback is visible in container logs
- New test_ensure_blla_model.py: covers model-OK early return, incompatible
model rename+replace, and missing model download paths
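A sketch of the narrowed probe, with a generic `load` callable standing in for kraken's model loading (names are illustrative):

```python
import logging

log = logging.getLogger("ocr-service")

def model_is_loadable(load, path):
    try:
        load(path)
        return True
    except (RuntimeError, OSError, ValueError):
        # Expected load failures mean "not loadable".
        return False
    except Exception:
        # Unexpected errors (missing kraken install, AttributeError on
        # vgsl, ...) are no longer silently masked.
        log.debug("unexpected error probing model %s", path, exc_info=True)
        return False
```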
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Three issues fixed:
1. --resize both was removed in ketos 7; replaced with --resize union,
which extends the model's class mapping to include training data classes.
2. ketos ignores -s when -i is present, so the 1800px blla model caused
7+ GB peak RAM and OOM-killed the host (no swap, 5 GB free).
Now checks the loaded model's input height: only uses the base model
when it was already fine-tuned at 800px; otherwise trains from scratch
at 800px (~200 MB peak). After the first run the trained 800px model
becomes the base for all subsequent fine-tuning runs.
3. segtrain now computes and returns cer = 1 - accuracy, matching the
recognition training path.
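Points 2 and 3 can be sketched as follows (800 is the target height from this commit; function names are illustrative):

```python
def can_finetune(base_model_height, target_height=800):
    # ketos ignores -s when -i is present, so reusing the base model is
    # only safe when it already has the target input height.
    return base_model_height == target_height

def cer_from_accuracy(accuracy):
    # segtrain reports accuracy; the recognition path stores CER.
    return 1.0 - accuracy
```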
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds ensure_blla_model.py which loads the blla segmentation model with
ketos on every container start. If the model is missing or in the legacy
PyTorch ZIP format (incompatible with ketos 7), it re-downloads the
correct CoreML protobuf model from Zenodo (DOI 10.5281/zenodo.14602569).
The Dockerfile now uses entrypoint.sh which runs this check before
starting uvicorn.
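A minimal probe for the legacy format, assuming the legacy model is a PyTorch ZIP archive while the CoreML protobuf file is never a valid ZIP:

```python
import zipfile

def is_legacy_pytorch_zip(path):
    # torch.save() output is a ZIP archive; the Zenodo CoreML .mlmodel
    # is a protobuf blob, which zipfile will not recognise.
    return zipfile.is_zipfile(path)
```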
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
_check_training_token previously skipped auth when TRAINING_TOKEN was
empty, allowing unauthenticated requests to reach /train and /segtrain.
Now returns 503 ("Training not configured on this node") when the token
is absent, so missing configuration fails closed rather than open.
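A self-contained sketch of the fail-closed check (a tiny stand-in replaces FastAPI's HTTPException so the example has no dependencies; the 403 for a mismatch is an assumption):

```python
class HTTPError(Exception):
    # Stand-in for fastapi.HTTPException in this sketch.
    def __init__(self, status, detail):
        super().__init__(detail)
        self.status, self.detail = status, detail

def check_training_token(provided, configured):
    if not configured:
        # Missing configuration fails closed, not open.
        raise HTTPError(503, "Training not configured on this node")
    if provided != configured:
        raise HTTPError(403, "Invalid training token")
```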
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Two workers × ~5 GB Surya model load = ~10 GB required, exceeding the
8 GB memory cap and causing OOM on the first /train call. Two OS
processes also cause model-state divergence after training, contradicting
the single-node constraint documented in ADR-001.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
ketos 7 defaults to safetensors output, but kraken's load_any() only
handles CoreML (.mlmodel). Adding --weights-format coreml ensures the
hot-swap after training produces a file that load_any() can parse.
Also fixed _find_best_model to look for best_<score>.mlmodel (produced
by --weights-format coreml) in addition to the previous checkpoint_*
pattern.
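A sketch of the widened search, picking the highest score across both filename patterns (the exact regexes are assumptions based on the names in the commit):

```python
import re

_PATTERNS = (
    re.compile(r"best_([0-9.]+)\.mlmodel$"),       # --weights-format coreml
    re.compile(r"checkpoint_.*?([0-9.]+)\.ckpt$"), # previous pattern
)

def find_best_model(filenames):
    best = None
    for name in filenames:
        for pattern in _PATTERNS:
            m = pattern.match(name)
            if m:
                score = float(m.group(1))
                if best is None or score > best[0]:
                    best = (score, name)
                break
    return best[1] if best else None
```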
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Kraken 7 removed support for the legacy `path` format (image + .gt.txt
pairs) in VGSLRecognitionDataModule despite the CLI still advertising it.
Switch to the PAGE XML format (-f page), which is the supported standard.
- Java export now writes .xml alongside .png (PAGE XML with TextLine,
Baseline at 75% height, and Unicode transcription)
- XML special characters in transcription text are escaped (& < >)
- Python trainer globs *.xml and passes -f page to ketos train
- Regenerated frontend API types to include cer/loss/accuracy/epochs on
OcrTrainingRun (were missing, causing empty CER column in history)
- Updated and extended TrainingDataExportServiceTest
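The per-line output of the exporter can be sketched as follows (a minimal approximation of the PAGE schema; element nesting and ids are simplified):

```python
from xml.sax.saxutils import escape

def page_xml_text_line(line_id, x, y, w, h, text):
    baseline_y = y + int(h * 0.75)  # baseline at 75% of the box height
    return (
        f'<TextLine id="{line_id}">'
        f'<Coords points="{x},{y} {x + w},{y} {x + w},{y + h} {x},{y + h}"/>'
        f'<Baseline points="{x},{baseline_y} {x + w},{baseline_y}"/>'
        # escape() covers the & < > special characters
        f'<TextEquiv><Unicode>{escape(text)}</Unicode></TextEquiv>'
        f'</TextLine>'
    )
```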
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
ketos segtrain has no batch-size flag (-B), so with the default 1800px
input height the intermediate CNN feature maps consume ~500 MB+ per
image, causing the kernel OOM-killer (exit -9) to terminate the process.
On first run (no existing blla.mlmodel), override the VGSL spec to use
800px height instead. Subsequent runs load the saved model with
--resize both, preserving incremental fine-tuning.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Force CPU-only training (--device cpu), cap OpenMP/BLAS thread pool at 2
(--threads 2), and reduce epochs from 50 to 10 (-N 10). 50 epochs on a
laptop OOM-killed the container. 10 epochs is sufficient for incremental
fine-tuning runs; more data is added over time and training re-run.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
DataLoader worker subprocesses crash inside Docker due to multiprocessing
fork restrictions. Pass --workers 0 to both ketos train and ketos segtrain
so data loading runs in the main process.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
kraken.ketos has no .train or .segtrain attributes in Kraken 7 — both are
only exposed as CLI commands. Rewrites both training functions to invoke
`ketos train` / `ketos segtrain` via subprocess and parse the best
val_metric from checkpoint filenames.
Also fixes the OcrTrainingCard history so it only shows non-blla runs
(recognition model), matching SegmentationTrainingCard which already
filtered to blla-only.
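The subprocess invocation can be sketched generically over argv so it covers both commands (error handling is illustrative):

```python
import subprocess

def run_training_cli(argv):
    # kraken 7 exposes train/segtrain only via the ketos CLI, e.g.
    # argv = ["ketos", "segtrain", "-f", "page", ...]
    proc = subprocess.run(argv, capture_output=True, text=True)
    if proc.returncode != 0:
        raise RuntimeError(f"{argv[0]} failed: {proc.stderr[-2000:]}")
    return proc.stdout
```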
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
After each training run, the Character Error Rate (CER = 1 - accuracy),
loss, accuracy, and epoch count are now stored on the OcrTrainingRun
record and shown in the training history table.
Also adds the missing POST /api/ocr/segtrain endpoint and the
triggerSegTraining service method so the segmentation training card
can actually trigger training.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Kraken's segmentation bounds check rejects coordinates where any point
satisfies x >= im.width or y >= im.height (strictly >=, not >). Using
(cw, ch) as the boundary corner was triggering this for every crop.
Changed to (cw-1, ch-1) so all coordinates are strictly inside the image.
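The corrected boundary as a sketch:

```python
def crop_boundary(cw, ch):
    # kraken rejects any point with x >= width or y >= height, so the
    # far corner must be (cw - 1, ch - 1), not (cw, ch).
    return [(0, 0), (cw - 1, 0), (cw - 1, ch - 1), (0, ch - 1)]
```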
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
blla.segment() runs a full-page layout detection model that kills the worker
process when called on tiny annotation crops (e.g. 597x89 px). For guided
OCR the annotation region IS already the text line, so segmentation is
unnecessary. Replace the blla call with a single synthetic BaselineLine that
spans the full crop width — rpred then runs recognition on the whole crop.
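A dict-based sketch of the synthetic line (the real code builds a kraken.containers.BaselineLine; placing the baseline at mid-height is an illustrative choice):

```python
def synthetic_baseline_line(cw, ch):
    y = ch // 2
    return {
        "id": "line_0",
        # Spans the full crop width, staying strictly inside the image.
        "baseline": [(0, y), (cw - 1, y)],
        "boundary": [(0, 0), (cw - 1, 0), (cw - 1, ch - 1), (0, ch - 1)],
    }
```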
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
When a document has manually drawn annotation boxes, the user can now
enable "Nur annotierte Bereiche" (only annotated areas) in the OCR trigger
panel. The engine
skips layout detection entirely and runs recognition only within the
pre-drawn bounding boxes, preserving manual transcription blocks.
- Python: adds OcrRegion model, extend OcrRequest/OcrBlock; guided
branch in /ocr/stream groups by page and crops each region
- Engines: add extract_region_text() to both Kraken and Surya
- Java: adds OcrBlockResult.annotationId, OcrClient.OcrRegion,
TriggerOcrDTO.useExistingAnnotations; OcrAsyncRunner dispatches to
upsertGuidedBlock when annotationId is present; OcrService threads
the flag through to runSingleDocument
- TranscriptionService: adds upsertGuidedBlock (creates, updates OCR,
or preserves MANUAL blocks)
- Frontend: guided OCR toggle in OcrTrigger shown when blocks exist;
skips destructive-replace confirmation in guided mode
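The guided branch's grouping step, sketched with assumed region field names (pageNumber, annotationId):

```python
from collections import defaultdict

def group_regions_by_page(regions):
    # Render each page image once, then crop every region on it.
    by_page = defaultdict(list)
    for region in regions:
        by_page[region["pageNumber"]].append(region)
    return dict(by_page)
```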
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add /segtrain endpoint to OCR service (ZIP upload, ketos.segtrain,
backup rotation, in-process model reload)
- Add segtrainModel() to OcrClient and RestClientOcrClient (10-min timeout,
X-Training-Token header)
- Add SegmentationTrainingExportService: PAGE XML export with polygon
de-normalization and per-page PNG rendering via PDFBox
- Add GET /api/ocr/segmentation-training-data/export endpoint
- Make TranscriptionBlock.text nullable for segmentation-only blocks
(V31 migration)
- Add Paraglide i18n translation keys for all training UI strings (de/en/es)
- Pass source prop from TranscriptionEditView to TranscriptionBlock
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- POST /train in ocr-service with ZIP Slip validation, TemporaryDirectory,
ketos transfer learning, timestamped backups (keep last 3), in-process reload
- X-Training-Token auth (no-op in dev when TRAINING_TOKEN env is empty)
- trainModel() in OcrClient interface + RestClientOcrClient (10-min timeout,
multipart upload, forwards X-Training-Token when configured)
- TRAINING_TOKEN env var wired in docker-compose; --workers 2 in Dockerfile
so /health stays responsive during synchronous training
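The ZIP Slip check can be sketched as:

```python
import os
import zipfile

def safe_extract(zip_path, dest):
    # Reject any member whose resolved path escapes the destination.
    dest = os.path.realpath(dest)
    with zipfile.ZipFile(zip_path) as zf:
        for member in zf.namelist():
            target = os.path.realpath(os.path.join(dest, member))
            if not (target == dest or target.startswith(dest + os.sep)):
                raise ValueError(f"unsafe path in archive: {member}")
        zf.extractall(dest)
```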
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
BaselineOCRRecord has 'baseline' and 'boundary' attributes, not 'line'
and 'cuts'. The fallback used record.line which doesn't exist, causing
AttributeError on every Kurrent OCR page.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Validates PDF download URLs against an ALLOWED_PDF_HOSTS allowlist
(default: minio,localhost,127.0.0.1) and disables redirect following
to prevent redirect-based SSRF.
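A sketch of the host check (default allowlist from the commit; the real client additionally disables redirect following):

```python
from urllib.parse import urlparse

ALLOWED_PDF_HOSTS = {"minio", "localhost", "127.0.0.1"}

def is_allowed_pdf_url(url):
    # Compare the parsed hostname, not a substring, so
    # http://minio.evil.example does not slip through.
    return urlparse(url).hostname in ALLOWED_PDF_HOSTS
```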
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Changed ocr-service dependency from service_healthy to service_started
since the backend already handles OCR unavailability gracefully. Removed
unused APP_S3_INTERNAL_URL env var. Added expose directive and
.dockerignore for ocr-service.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Cover Surya polygon/word-level extraction, health endpoint states,
Kraken script-type routing, 503 when models not ready, 400 when
Kraken unavailable for Kurrent, and confidence marker application
during streaming. Production code coverage: 88%.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The PDF viewer uses 1-based currentPage (starting at 1) but the OCR
engines produced 0-based pageNumber from enumerate(). Annotations
created by OCR were assigned to page 0, which doesn't exist in the
viewer. Change enumerate() to start=1 in both engines and the
streaming endpoint.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Streams one JSON line per completed page instead of buffering the
entire result. Emits start/page/error/done events. On per-page
failure, logs the traceback but yields a generic error message and
continues with the next page. Adds X-Accel-Buffering: no and
Cache-Control: no-cache headers for reverse-proxy compatibility.
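The event stream, sketched as a generator (event names follow the commit; the payload shape is illustrative):

```python
import json

def stream_pages(pages, process_page):
    yield json.dumps({"event": "start", "pages": len(pages)}) + "\n"
    for number, page in enumerate(pages, start=1):
        try:
            blocks = process_page(page)
            yield json.dumps({"event": "page", "pageNumber": number,
                              "blocks": blocks}) + "\n"
        except Exception:
            # The real code logs the traceback here; clients only get
            # a generic message, and the stream continues.
            yield json.dumps({"event": "error", "pageNumber": number,
                              "message": "page failed"}) + "\n"
    yield json.dumps({"event": "done"}) + "\n"
```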
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Enable per-page processing by extracting the inner loop body of
extract_blocks() into extract_page_blocks(image, page_idx, language).
The original extract_blocks() now delegates to the new function,
preserving backward compatibility for the batch path.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
OCR engines are CPU-bound and were blocking Uvicorn's single async
event loop, making /health unresponsive during processing. This caused
new OCR requests to fail silently (health check failure → no DB record
→ UI shows NONE). Wrap engine calls in asyncio.to_thread() to keep the
event loop free. Also surface OCR trigger errors in the frontend
instead of silently resetting the spinner.
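The offloading pattern in a runnable sketch:

```python
import asyncio

async def run_engine(engine_call, *args):
    # CPU-bound OCR work runs in a worker thread, so the event loop
    # stays free to answer /health while a page is processing.
    return await asyncio.to_thread(engine_call, *args)
```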
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pydantic v2 Field(alias=...) doesn't work with FastAPI as expected.
The Java client sends camelCase (pdfUrl, scriptType, pageNumber).
Use camelCase field names directly instead of aliases.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Surya models lazy-load on first OCR request instead of at startup
(saves ~3-4GB idle RAM — Kraken stays eager at ~16MB)
- Process one page at a time in Surya engine (limits peak memory)
- RECOGNITION_BATCH_SIZE=1, DETECTOR_BATCH_SIZE=1 (slower but fits in RAM)
- Revert mem_limit back to 6GB (sufficient with these optimizations)
- Render DPI stays at 200
Idle memory: ~2GB (Kraken only). Peak during OCR: ~5-6GB (Surya loaded).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Surya 0.17 models use ~5GB idle. At 300 DPI on a multi-page PDF,
page images + inference tensors push past the 6GB limit, causing
OOM kills during 'Detecting bboxes'. Increased to 10GB and reduced
render DPI to 200 (still sufficient for OCR, uses ~44% less memory).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The OCR service was getting 403 Forbidden because it tried to
download PDFs from MinIO using plain internal URLs without
authentication. MinIO buckets are private.
- Add S3Presigner bean to MinioConfig
- FileService.generatePresignedUrl(): generates 15-min presigned URLs
- OcrService uses presigned URLs instead of plain internal URLs
- Remove unused s3InternalUrl / bucketName @Value fields from OcrService
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Kraken 7 requires pyvips (optional dep) for -f pdf mode.
Added libvips42 system package and pyvips Python package.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
torchvision installed from PyPI expects CUDA torch operator
registrations. Installing from the CPU whl index ensures torchvision
matches the CPU-only torch build. Fixes 'torchvision::nms does not
exist' RuntimeError on startup.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
transformers 5.x breaks surya 0.17.1 — SuryaDecoderConfig is missing
pad_token_id. Pin to transformers>=4.56.1,<5.0.0.
Also add torch==2.7.1 to requirements.txt to prevent pip from upgrading
it past the CPU-only build installed in the Dockerfile layer.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
New confidence.py module with two functions:
- apply_confidence_markers(): replaces words below threshold with
[unleserlich], collapses adjacent markers into one
- words_from_characters(): reconstructs word-level confidence from
Kraken's character-level data
Surya 0.17 provides native word-level confidence via line.words.
Kraken 7.0 provides per-character confidences via record.confidences.
Both engines now pass word+confidence data through main.py, which
applies the marker post-processing before returning the API response.
Threshold configurable via OCR_CONFIDENCE_THRESHOLD env var (default 0.3).
Frontend already renders [unleserlich] markers via transcriptionMarkers.ts.
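The marker post-processing can be sketched as follows (words as (text, confidence) pairs; names mirror the commit):

```python
import os

OCR_CONFIDENCE_THRESHOLD = float(os.environ.get("OCR_CONFIDENCE_THRESHOLD", "0.3"))

def apply_confidence_markers(words, threshold=OCR_CONFIDENCE_THRESHOLD,
                             marker="[unleserlich]"):
    out = []
    for text, confidence in words:
        token = marker if confidence < threshold else text
        if token == marker and out and out[-1] == marker:
            continue  # collapse adjacent markers into one
        out.append(token)
    return " ".join(out)
```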
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
kraken 5.2.9 required torch~=2.1.0, incompatible with surya-ocr's
torch>=2.3.0. kraken 6.0.3 requires torch>=2.4.0,<=2.9 which
overlaps with surya and our pinned torch==2.5.1.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
surya-ocr 0.6.3 requires pillow<11.0.0,>=10.2.0. The previous
pin at 11.1.0 caused a dependency resolution failure during
Docker build.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Python microservice (ocr-service/):
- FastAPI app with /ocr and /health endpoints
- Surya engine: transformer-based OCR for typewritten/modern handwriting
- Kraken engine: historical HTR for Kurrent/Suetterlin with
pure-Python polygon-to-quad approximation (gift wrapping + rotating calipers)
- Eager model loading at startup via lifespan context manager
- PDF download via httpx, page rendering via pypdfium2 at 300 DPI
Java RestClientOcrClient:
- Implements OcrClient + OcrHealthClient interfaces
- Calls Python service via Spring RestClient
- Health check with graceful fallback
Docker Compose:
- New ocr-service container (mem_limit 6g, no host ports)
- Health check with start_period 60s for model loading
- ocr_models volume for Kraken model files
- Backend depends on ocr-service health
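The gift-wrapping half of the polygon-to-quad approximation, as a stdlib sketch (Jarvis march; rotating calipers would then take the hull to a minimum-area quad):

```python
def convex_hull(points):
    # Gift wrapping: repeatedly pick the point that keeps all others on
    # one side of the current edge, starting from the leftmost point.
    pts = sorted(set(points))
    if len(pts) < 3:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    hull, start, p = [], pts[0], pts[0]
    while True:
        hull.append(p)
        q = pts[0] if pts[0] != p else pts[1]
        for r in pts:
            if r == p:
                continue
            c = cross(p, q, r)
            # Take r if it is strictly more clockwise than q, or collinear
            # but farther away (skips points on a hull edge's interior).
            if c < 0 or (c == 0 and
                         (r[0] - p[0]) ** 2 + (r[1] - p[1]) ** 2 >
                         (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2):
                q = r
        p = q
        if p == start:
            break
    return hull
```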
Refs #226, #227
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>