devops(backend): switch to multi-stage Docker build #238

Merged
marcel merged 4 commits from devops/multi-stage-docker-build into main 2026-04-15 11:33:04 +02:00
Owner

Summary

  • Replaces spring-boot:run at container startup with a proper multi-stage build
  • Stage 1 (builder): compiles JAR using BuildKit cache mount for ~/.m2 — subsequent builds skip dependency downloads
  • Stage 2 (runtime): eclipse-temurin:21-jre with only app.jar — smaller image, no JDK in production
  • Removes ./backend:/app source mount and maven_cache named volume from docker-compose.yml

Motivation

The previous setup recompiled the entire project on every container start, causing:

  • 90+ second cold starts
  • Restart loops while health checks fired during compilation
  • Fragile incremental builds mixing old and new class files in the bind-mounted target/

Deploy

docker compose up -d --build

Subsequent rebuilds are fast: only the src/ layer is invalidated when source changes; the dependency layer stays cached.
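The layering described above can be sketched as follows — directory names, flags, and the entrypoint are illustrative assumptions, not copied from the actual Dockerfile in this PR:

```dockerfile
# Sketch of the two-stage layout (builder + runtime) described above.
FROM eclipse-temurin:21-jdk AS builder
WORKDIR /app

# Dependency layer: invalidated only when pom.xml changes
COPY .mvn/ .mvn/
COPY mvnw pom.xml ./
RUN --mount=type=cache,target=/root/.m2 ./mvnw dependency:go-offline -q

# Source layer: invalidated on every source change; deps above stay cached
COPY src/ src/
RUN --mount=type=cache,target=/root/.m2 ./mvnw clean package -DskipTests -q

# Runtime stage: JRE only, single JAR, no JDK or build tools
FROM eclipse-temurin:21-jre
WORKDIR /app
COPY --from=builder /app/target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Because the `COPY pom.xml` layer sits above the `COPY src/` layer, editing only source code reuses the cached dependency-resolution layer on rebuild.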

Test plan

  • docker compose build backend succeeds
  • docker compose up -d backend starts container cleanly
  • Backend reaches Started FamilienarchivApplication with no errors
  • /actuator/health returns UP

🤖 Generated with Claude Code

marcel added 1 commit 2026-04-15 11:16:51 +02:00
devops(backend): switch to multi-stage Docker build
Some checks failed
CI / Unit & Component Tests (push) Failing after 2s
CI / Backend Unit Tests (push) Failing after 1s
CI / Unit & Component Tests (pull_request) Failing after 1s
CI / Backend Unit Tests (pull_request) Failing after 1s
d943aac3e3
Replace runtime mvn spring-boot:run with a proper multi-stage build:
- Stage 1 (builder): compiles JAR with BuildKit cache mount for ~/.m2
- Stage 2 (runtime): eclipse-temurin:21-jre with only the JAR

Removes the backend source volume mount and maven_cache named volume.
Deploy with: docker compose up -d --build

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Author
Owner

👨‍💻 Felix Brandt — Senior Fullstack Developer

Verdict: Approved

No production code touched, no business logic changed, no test files affected. This is infrastructure only and the fix is the right call — I was the one hitting the restart loops during development.

Suggestions

Use -Dmaven.test.skip=true instead of -DskipTests in the Dockerfile

RUN --mount=type=cache,target=/root/.m2 ./mvnw clean package -Dmaven.test.skip=true -q

-DskipTests (= -Dmaven.surefire.skip=true) still compiles test sources. -Dmaven.test.skip=true skips both compilation and execution. Since we never run tests in the image, skipping test compilation saves time and avoids pulling test-only dependencies. If we ever have a test-only compile error (like a missing class referenced in a test), -DskipTests fails the Docker build; -Dmaven.test.skip=true does not.

The *.jar glob is safe — but document why

# Spring Boot repackages to familienarchiv-0.0.1-SNAPSHOT.jar;
# the original pre-repackage artifact has a .jar.original extension, not .jar.
COPY --from=builder /app/target/*.jar app.jar

The glob works because Spring Boot Maven Plugin renames the pre-repackage artifact to .jar.original. A one-line comment prevents future confusion if someone wonders "what if there are two JARs?"

Author
Owner

🏛️ Markus Keller — Application Architect

Verdict: Approved

This removes an antipattern — bind-mounting source code into a runtime container and compiling at startup conflates build concerns with runtime concerns. A container image should be an immutable artifact. This PR makes it one.

What's correct

  • Multi-stage build is the right pattern: the builder stage produces the artifact; the runtime stage ships only what runs. The JDK never enters the production image.
  • BuildKit cache mount for ~/.m2 is better than a named Docker volume for build cache — it's managed by BuildKit, not Docker Compose, and doesn't leak into the running container.
  • Removing maven_cache from the named volumes list is correct — it was a workaround for the old runtime-compilation approach and has no place in the new model.
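The resulting backend service definition would take roughly this shape — field values here are illustrative, only the retained `./import:/import` mount and the removed source mount/`maven_cache` volume are stated in the PR:

```yaml
# Illustrative shape of the backend service after this PR: the image is
# built from the Dockerfile; no source bind mount, no maven_cache volume.
services:
  backend:
    build: ./backend
    restart: unless-stopped
    volumes:
      - ./import:/import   # runtime data only; source code no longer mounted
```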

Suggestions

Consider a Compose overlay for environment separation

The current single docker-compose.yml serves both dev and (eventually) production. A docker-compose.prod.yml overlay would allow environment-specific overrides without duplicating the base file:

# Development (default)
docker compose up -d

# Production (overlay: different ports, no mailpit, Hetzner S3 instead of MinIO)
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d

This isn't a blocker for this PR — it's the next natural step once the build pipeline is solid.

restart: unless-stopped behaviour changes with a pre-built JAR

Under the old setup, a bad startup (e.g., Flyway migration failure) resulted in a 90-second pause per retry because compilation was happening. With a pre-built JAR, startup is ~15 seconds, so the restart loop is much tighter. Not a problem today but something to be aware of if a bad migration ships — it will hammer the database faster. No change needed now.

Author
Owner

🧪 Sara Holt — QA Engineer

Verdict: Approved

No test files changed. No test infrastructure affected. Testcontainers-based integration tests are entirely independent of the Docker Compose setup and are not impacted by this change.

Observations

Test execution is now decoupled from deployment — make this explicit

The old setup ran spring-boot:run which (awkwardly) included test compilation as part of startup. With package -DskipTests, tests are explicitly not run during the image build. This is correct, but it means there must be a separate step for running tests — either locally (./mvnw test) or in CI.

If CI doesn't exist yet, the PR description should note the expected test command so the workflow is unambiguous:

# Build and verify (run separately before docker compose up --build)
cd backend && ./mvnw test
cd backend && ./mvnw clean package -DskipTests
docker compose up -d --build

The manual test checklist in the PR description is the right instinct

The four checked items (build, start, actuator health) represent a minimal smoke test. Once CI is configured, these should be automated assertions — not manual checks. A post-deploy smoke test against /actuator/health is straightforward to automate.
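A minimal sketch of such an automated smoke test, assuming the backend listens on localhost:8080 (the actual port is not stated in this PR):

```shell
#!/usr/bin/env sh
# Hypothetical post-deploy smoke test — HEALTH_URL is an assumption.
HEALTH_URL="${HEALTH_URL:-http://localhost:8080/actuator/health}"

# Returns 0 when the JSON body reports "status":"UP"
is_up() {
  printf '%s' "$1" | grep -q '"status"[[:space:]]*:[[:space:]]*"UP"'
}

# Poll until healthy or give up after ~60 seconds (20 tries x 3s)
smoke() {
  i=0
  while [ "$i" -lt 20 ]; do
    body=$(curl -fsS "$HEALTH_URL" 2>/dev/null) && is_up "$body" && return 0
    i=$((i + 1))
    sleep 3
  done
  echo "backend did not become healthy" >&2
  return 1
}
```

Running `smoke` after `docker compose up -d --build` turns the manual actuator check into a CI-friendly pass/fail step.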

start_period: 60s is now very conservative

With compilation removed from startup, the backend starts in ~15 seconds (JVM init + Flyway). The first health check doesn't fire for 60 seconds. Not a correctness issue, but it means failed starts are detected 45 seconds later than necessary. Consider reducing to 30s.

Author
Owner

🔒 Nora "NullX" Steiner — Application Security Engineer

Verdict: ⚠️ Approved with concerns

The move to a runtime JRE image is a genuine security improvement — the JDK compiler, jshell, and jcmd tools are not present in eclipse-temurin:21-jre, reducing the attacker's toolkit after a container escape. BuildKit cache mounts don't persist into the runtime image, so no build artifacts or Maven credentials are exposed in the final layer. No secrets appear in the Dockerfile.

One concern needs attention before production use.

Blockers

None that prevent merging for dev use. One concern for production:

Suggestions

Pin image tags to specific digests (important for production)

Both base images use floating tags:

FROM eclipse-temurin:21-jdk AS builder   # moves when Adoptium releases a patch
FROM eclipse-temurin:21-jre              # same

eclipse-temurin:21-jdk is not a pinned version — it will silently update when Adoptium publishes 21.0.8. This means:

  • Builds are non-reproducible
  • A supply-chain compromise of the upstream image would propagate automatically

For production, pin to image digest:

FROM eclipse-temurin:21-jdk@sha256:<digest> AS builder
FROM eclipse-temurin:21-jre@sha256:<digest>

Or at minimum, pin to the full tag including patch version (e.g. eclipse-temurin:21.0.7_6-jdk-jammy). Use Renovate to automate version bump PRs when new patches are released.
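A minimal Renovate configuration for this could look like the following — the file name, preset, and rule scope are assumptions, not part of this PR:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchDatasources": ["docker"],
      "matchPackageNames": ["eclipse-temurin"],
      "pinDigests": true
    }
  ]
}
```

With `pinDigests` enabled, Renovate rewrites the `FROM` lines to `tag@sha256:…` form and opens a PR whenever Adoptium publishes a new patch.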

Pre-existing concerns (not introduced by this PR)

  • S3_ACCESS_KEY: ${MINIO_ROOT_USER} — root MinIO credentials used for application S3 access. Root can delete all buckets. Create a service account with bucket-scoped permissions before production deployment.
  • "${PORT_DB}:5432" — PostgreSQL port exposed to the host machine. Use expose: ["5432"] in production so only the archive-net Docker network can reach the database.
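The second point could look like this in a production compose file — the service name `db` is an assumption, match it to the actual file:

```yaml
# Hypothetical production service definition: no host port mapping for
# PostgreSQL, so only containers on the archive-net network can connect.
services:
  db:
    expose:
      - "5432"
```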
Author
Owner

🎨 Leonie Voss — UI/UX Designer & Accessibility Strategist

Verdict: Approved

No UI changes. No Svelte components. No frontend routes. No CSS. No accessibility-relevant changes.

Checked: no frontend service definition changes in docker-compose.yml. The frontend service, its volumes, and its environment variables are untouched. This PR has zero user-visible impact and no design concerns to raise.

Author
Owner

🛠️ Tobias Wendt — DevOps & Platform Engineer

Verdict: ⚠️ Approved with concerns

The approach is correct and solves a real problem. Multi-stage builds with BuildKit cache mounts are the right pattern for this stack. The dependency layer separation (pom.xml → dependency:go-offline → src/) means only source changes invalidate the compile step. Good.

Two things need addressing:

Blockers

Missing .dockerignore — target/ (111MB) is sent to the BuildKit daemon on every build

The backend's target/ directory is 111MB of compiled classes, JARs, test reports, and surefire output. Without a .dockerignore, Docker sends the entire build context to the daemon before the first instruction executes. On a fresh CI runner or new developer machine, that's 111MB of wasted transfer before a single layer runs. The cached build appeared fast locally because the daemon already had the context, but cold builds will be noticeably slower.

Create backend/.dockerignore:

target/
.git/
*.md
api_tests/

The three COPY instructions only need .mvn/, mvnw, pom.xml, and src/ — everything else in the build context is noise.

Suggestions

eclipse-temurin:21-jdk and eclipse-temurin:21-jre are floating tags

Per the Tobias Wendt house rule: :latest is not a version, and neither is :21-jdk. When Adoptium releases 21.0.8, the tag moves, builds become non-reproducible, and rollback is impossible.

Pin to the full distribution tag:

FROM eclipse-temurin:21.0.7_6-jdk-jammy AS builder
FROM eclipse-temurin:21.0.7_6-jre-jammy

Then add Renovate to automate version bump PRs. This is a one-time setup cost that pays dividends forever.

dependency:go-offline is an approximation

dependency:go-offline downloads declared POM dependencies but misses most Maven plugin dependencies (Surefire, Jacoco, Spring Boot Maven plugin). The first build with a cold cache downloads plugin deps during mvnw clean package, which works — they get cached in the BuildKit cache mount for subsequent runs. But the dependency:go-offline step itself can be slow and only provides partial cache priming. An alternative that achieves the same layer separation more reliably:

# Option: download dependencies by running the full build dry-run
RUN --mount=type=cache,target=/root/.m2 ./mvnw clean package -Dmaven.test.skip=true -q --fail-at-end || true

Or simply drop the dependency:go-offline step and accept that the first cold build is slower — subsequent builds use the cached layer anyway.

start_period: 60s should be reduced now that compilation is gone

The old 60s accounted for 90+ seconds of compilation (with 60s headroom before first health check). With a pre-built JAR, Spring Boot + Flyway starts in ~15 seconds. start_period: 30s is sufficient and means failed starts are caught 30 seconds sooner.

healthcheck:
  start_period: 30s  # was 60s — JAR starts in ~15s, no compilation delay
  interval: 15s
  timeout: 5s
  retries: 5

What is done well

  • BuildKit cache mount for ~/.m2 is correct — avoids the permission and lifecycle issues of the old named volume
  • JRE-only runtime image is correct — removes the JDK from the attack surface
  • ./import:/import volume retained — runtime data correctly kept as a bind mount
  • maven_cache named volume cleanly removed from both the service definition and the global volumes block
marcel added 3 commits 2026-04-15 11:32:03 +02:00
Prevents 111MB of compiled output from being sent to the BuildKit daemon
on cold builds. Only .mvn/, mvnw, pom.xml, and src/ are needed by the
three COPY instructions in the Dockerfile.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Pin to eclipse-temurin:21.0.10_7-{jdk,jre}-noble for reproducible builds
- Switch -DskipTests to -Dmaven.test.skip=true: skips test compilation entirely,
  not just execution — faster and avoids build failures from test-only missing classes
- Add comment on COPY *.jar explaining why the glob is safe (Spring Boot renames
  the pre-repackage artifact to .jar.original, leaving only one .jar in target/)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
devops(backend): reduce healthcheck start_period to 30s
Some checks failed
CI / Unit & Component Tests (push) Failing after 2s
CI / Backend Unit Tests (push) Failing after 1s
CI / Unit & Component Tests (pull_request) Failing after 1s
CI / Backend Unit Tests (pull_request) Failing after 2s
bbafbe6a07
With a pre-built JAR, Spring Boot + Flyway starts in ~15 seconds.
The previous 60s was sized for runtime compilation (90+ seconds).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Author
Owner

Review concerns addressed

All actionable concerns from the six-persona review have been resolved.


Blocker fixed — missing .dockerignore (Tobias)

Commit: eee2d4f — devops(backend): add .dockerignore to exclude target/ from build context

Added backend/.dockerignore excluding target/, .git/, *.md, api_tests/. Build context transfer dropped from ~1MB (warm cache) to 97KB — and cold builds no longer send 111MB of compiled output to the daemon.


Fixed — -DskipTests → -Dmaven.test.skip=true (Felix)

Commit: 3865a9c — devops(backend): pin eclipse-temurin tags, skip test compilation, document jar glob

-Dmaven.test.skip=true skips test compilation entirely, not just execution. Faster build and immune to test-only compile errors (e.g. a missing class referenced only in test code).


Fixed — *.jar glob documented (Felix)

Commit: 3865a9c — same commit

Added inline comment on the COPY --from=builder line explaining why the glob is safe (Spring Boot Maven Plugin renames the pre-repackage artifact to .jar.original).


Fixed — image tags pinned to 21.0.10_7 (Tobias + Nora)

Commit: 3865a9c — same commit

Both base images pinned to eclipse-temurin:21.0.10_7-{jdk,jre}-noble. Builds are now reproducible and safe from silent upstream tag mutations.


Fixed — start_period: 60s → 30s (Tobias + Sara)

Commit: bbafbe6 — devops(backend): reduce healthcheck start_period to 30s

JAR starts in ~15 seconds. The 60s was sized for runtime compilation. 30s gives 2x headroom with faster detection of failed starts.


Deferred

  • dependency:go-offline approximation — noted by Tobias, works correctly in practice, deferred
  • Compose overlay for environments — tracked as issue #239
  • MinIO root credentials / PostgreSQL port — pre-existing, tracked separately
marcel merged commit 57c44cf02f into main 2026-04-15 11:33:04 +02:00
marcel deleted branch devops/multi-stage-docker-build 2026-04-15 11:33:04 +02:00

Reference: marcel/familienarchiv#238