Unbounded limit/offset parameters allow resource exhaustion #11

Open
opened 2026-04-02 11:21:16 +02:00 by marcel · 5 comments
Owner

Problem

The limit and offset query parameters in AdminController, RecipeController, and AdminService have no upper bound validation. A user can set limit=999999 to force loading entire tables into memory. Setting limit=0 causes ArithmeticException in PageRequest.of(offset / limit, limit).

Affected files

  • AdminController.java:26-27 — limit defaulted to 50 but unbounded
  • RecipeController.java:36-37 — same
  • AdminService.java:45 — PageRequest.of(offset / limit, limit) divides by limit

Attack scenario

  1. GET /v1/admin/users?limit=0 → ArithmeticException (division by zero) → stack trace leak (see #7)
  2. GET /v1/recipes?limit=1000000 → OOM or extreme DB load

Recommended fix

Clamp limit to the 1..100 range and validate offset >= 0. Reject invalid values with 400 Bad Request.
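A minimal sketch of that rule, independent of any framework (the class name and bounds are illustrative; rejecting out-of-range values rather than silently clamping is what maps cleanly to a 400 response):

```java
// Illustrative sketch of the proposed bounds. Throwing on invalid input
// lets the web layer translate the failure into a 400 Bad Request.
class PaginationParams {

    static final int MIN_LIMIT = 1;
    static final int MAX_LIMIT = 100;

    final int offset;
    final int limit;

    PaginationParams(int offset, int limit) {
        if (limit < MIN_LIMIT || limit > MAX_LIMIT) {
            throw new IllegalArgumentException(
                    "limit must be between " + MIN_LIMIT + " and " + MAX_LIMIT);
        }
        if (offset < 0) {
            throw new IllegalArgumentException("offset must be >= 0");
        }
        this.offset = offset;
        this.limit = limit;
    }
}
```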

Severity

Medium — denial of service via resource exhaustion; crash via division by zero.

marcel added the kind/securitypriority/medium labels 2026-04-02 11:21:47 +02:00
Author
Owner

👨‍💻 Kai — Frontend Engineer

This is a backend validation issue, but I want to flag where the frontend touches pagination parameters.

Where I send limit and offset

  • Any recipe list or admin user list in the UI will pass these parameters when fetching paginated data. Right now, if my code sends a limit=0 by accident (e.g., an uninitialized state variable, a misconfigured default), the backend crashes instead of returning a clean error. That's a bad failure mode for my +page.server.ts load functions.

Questions for me

  • Are limit and offset user-controlled in the UI, or are they fixed constants in my fetch calls? If they're fixed (e.g., always limit=25), the attack vector doesn't go through my UI — it's a direct API caller issue. But the fix is still correct.
  • If there's any UI with user-controlled pagination (e.g., a "show more" with a configurable page size), I need to validate client-side too — not as a security control, but to give immediate feedback before the server round-trip.

What I need from the backend fix

  • A clean 400 Bad Request with a structured error body when limit or offset is invalid. Not a 500, not a stack trace (that's issue #7), not an empty response.
  • The error body format should be consistent with other validation errors so my +page.server.ts error handling can treat all validation failures the same way.
  • Confirm the valid range after clamping — is it 1..100 as the issue suggests? I'll document that in any API call utilities I write.
Author
Owner

🛠️ Backend Engineer

Two separate bugs in one issue — worth fixing together but tracking distinctly: a division-by-zero crash and an unbounded resource consumption vector.

Fix for limit=0 (the crash)

The root cause is PageRequest.of(offset / limit, limit) — integer division by zero when limit=0. The fix is straightforward: validate before computing.

Preferred approach using Bean Validation in the controller:

@GetMapping("/users")
public ResponseEntity<Page<UserDto>> getUsers(
    @RequestParam(defaultValue = "0") @Min(0) int offset,
    @RequestParam(defaultValue = "50") @Min(1) @Max(100) int limit
) { ... }

With @Validated on the controller class and an @ExceptionHandler(ConstraintViolationException.class) in a @ControllerAdvice, this produces a clean 400 with no stack trace and no division by zero.
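A sketch of that handler, assuming the jakarta.validation namespace; the error-body shape ({"error": "..."}) is an assumption and should match whatever the project's other validation errors already use:

```java
import jakarta.validation.ConstraintViolationException;
import java.util.Map;
import java.util.stream.Collectors;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

// Translates bean-validation failures on @RequestParam arguments into a
// structured 400 body, so the ArithmeticException path is never reached.
@RestControllerAdvice
class ValidationErrorHandler {

    @ExceptionHandler(ConstraintViolationException.class)
    ResponseEntity<Map<String, String>> onConstraintViolation(ConstraintViolationException ex) {
        // Join all violations into one message,
        // e.g. "getUsers.limit must be less than or equal to 100"
        String message = ex.getConstraintViolations().stream()
                .map(v -> v.getPropertyPath() + " " + v.getMessage())
                .collect(Collectors.joining("; "));
        return ResponseEntity.badRequest().body(Map.of("error", message));
    }
}
```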

Fix for limit=999999 (the resource exhaustion)

@Max(100) on the limit parameter caps it. The question is: is 100 the right ceiling? Depends on the use case:

  • Admin user list: 100 seems reasonable.
  • Recipe list with potential autocomplete: maybe 50.
  • Shopping list items: could be higher if a household has many items.

The ceiling doesn't need to be the same for every endpoint — use what makes sense for each resource type.

Questions

  • Is PageRequest.of(offset / limit, limit) the right pagination model? This converts an absolute offset to a page number, which is correct but unusual. Spring Data's PageRequest.of(page, size) expects a 0-indexed page number. If offset=50 and limit=25, this gives page 2 — correct. But if offset=30 and limit=25, you get page 1 (integer division), which skips the first 25 items instead of 30. Is this intentional?
  • Should the pagination model be page + size instead of offset + limit? The current approach is offset-based pagination from the API but page-based internally — that mismatch is worth resolving cleanly.
  • Is AdminController the only place with the division-by-zero bug, or does RecipeController have the same PageRequest.of(offset / limit, limit) pattern?
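The integer-division skew flagged in the first question can be checked directly. A minimal sketch of the conversion the service performs:

```java
// Minimal sketch of the offset-to-page conversion described above.
// Integer division drops the remainder, so any offset that is not a
// multiple of limit is silently rounded down to a page boundary.
class OffsetToPage {

    static int toPage(int offset, int limit) {
        return offset / limit;
    }

    // The index of the first item the resulting PageRequest actually returns.
    static int effectiveStart(int offset, int limit) {
        return toPage(offset, limit) * limit;
    }
}
```

toPage(50, 25) gives page 2, which starts exactly at item 50; toPage(30, 25) gives page 1, which starts at item 25 instead of 30.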
Author
Owner

🧪 QA Engineer

Two distinct failure modes here — each needs its own test, and both are trivially reproducible.

Tests for the division-by-zero crash (limit=0)

  • shouldReturn400WhenLimitIsZero() — GET /v1/admin/users?limit=0 → assert 400, assert no stack trace in response body (see #7).
  • shouldReturn400WhenLimitIsZeroForRecipes() — same for GET /v1/recipes?limit=0.
  • shouldReturn400WhenLimitIsNegative() — limit=-1 should also be rejected.

Tests for resource exhaustion (limit=999999)

  • shouldReturn400WhenLimitExceedsMaximum() — GET /v1/recipes?limit=101 → assert 400.
  • shouldReturn400WhenLimitIsExtremelyLarge() — GET /v1/admin/users?limit=999999 → assert 400, assert the request completes quickly (no DB query should execute with this value).
  • shouldReturn200WithMaxAllowedLimit() — GET /v1/recipes?limit=100 → assert 200 (boundary value, valid).
  • shouldReturn200WithMinAllowedLimit() — GET /v1/recipes?limit=1 → assert 200 (boundary value, valid).

Tests for offset validation

  • shouldReturn400WhenOffsetIsNegative() — offset=-1 → assert 400.
  • shouldReturn200WhenOffsetIsZero() — offset=0 → assert 200 (default, valid).

Parameterized test opportunity

All the invalid-parameter cases are a great fit for @ParameterizedTest:

@ParameterizedTest
@ValueSource(ints = {-1, 0, 101, 999999})
void shouldReturn400ForInvalidLimit(int limit) { ... }

Question

  • Is there an existing @ControllerAdvice / @ExceptionHandler for ConstraintViolationException? If not, the test for limit=0 will currently return a 500, not a 400 — which means we need both the validation annotation fix AND the exception handler before the tests can pass. These might need to land together.
Author
Owner

🔒 Sable — Security Engineer

Two vulnerabilities in one issue — both exploitable with a single HTTP request, no authentication bypass required.

Vulnerability 1: Denial of Service via limit=0

  • GET /v1/admin/users?limit=0 throws ArithmeticException (division by zero) in AdminService.
  • Depending on the exception handling configuration (see #7), this likely produces a 500 with a stack trace — leaking internal class names, line numbers, and possibly the query structure.
  • This is a trivially scriptable DoS: a loop sending ?limit=0 repeatedly would generate repeated exceptions, fill logs, and degrade performance. No authentication needed if the endpoint is accessible to any authenticated user.

Vulnerability 2: Resource Exhaustion via limit=999999

  • No authentication bypass needed — any authenticated user can send this.
  • Effect: forces a full table scan on users or recipes, loads all rows into the JPA entity graph, and serializes them into the HTTP response. On a large dataset, this is an OOM or extreme GC pressure event.
  • Even on a small dataset, it forces the DB to process SELECT * FROM recipes LIMIT 999999 — potentially a full sequential scan.

Fix requirements from a security standpoint

  • Reject invalid values with 400 before any DB query executes. The validation must be at the controller or filter level, not inside the service after PageRequest is constructed.
  • The error response body must not include stack traces, internal class names, or query fragments. A structured { "error": "limit must be between 1 and 100" } is fine.
  • The valid range should be encoded in the OpenAPI spec so clients know the contract.
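As a sketch of what that contract could look like in the spec (a hypothetical OpenAPI fragment, mirroring the 1..100 range proposed in this issue; adjust defaults to whatever each controller actually uses):

```yaml
# Hypothetical OpenAPI parameter definitions for the paginated endpoints.
parameters:
  - name: limit
    in: query
    required: false
    schema:
      type: integer
      minimum: 1
      maximum: 100
      default: 50
  - name: offset
    in: query
    required: false
    schema:
      type: integer
      minimum: 0
      default: 0
```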

Additional questions

  • Are these endpoints accessible to all authenticated users, or only admins? If GET /v1/admin/users requires admin role, the attack surface is smaller — but admin accounts can still be compromised or misused, so validation still matters.
  • Is there any rate limiting on these endpoints? Even with proper validation, a DoS via high request volume is possible without rate limiting. Out of scope for this issue, but worth tracking.
  • Does the stack trace exposure via limit=0 compound issue #7? If #7 is fixed first (stack traces suppressed in prod), the information leakage component of this bug is mitigated — but the crash itself still needs fixing.
Author
Owner

🎨 Atlas — UI/UX Designer

Backend validation issue, but pagination parameters touch the UI in a few ways worth designing around.

Where pagination appears in the UI

  • Any recipe list (screen B1 or similar) that loads pages of results will pass limit and offset. The default limit=50 from the backend is fine for a household-sized recipe collection — but if we ever add a search or filter view, the page size may need to be configurable.
  • The admin user list is an internal screen — likely not designed yet since it's an admin-only view. This gives us a chance to get the pagination UX right from the start.

Design questions

  • Is pagination in the recipe list UI page-based ("Page 1 of 4") or scroll-based ("load more")? The answer affects what limit value I'd spec — a fixed page size for page-based, a smaller chunk size for infinite scroll.
  • If the backend caps limit at 100, does the UI ever need to show more than 100 items at once? For a household recipe collection, probably not. But for admin views, a household might have hundreds of users or recipes.

Error state for invalid pagination

  • If somehow the frontend sends an invalid limit and gets a 400 back, what does the user see? An empty list? A generic error? Since this shouldn't happen in normal UI usage, a simple "couldn't load results, try refreshing" toast is probably sufficient.
  • The more interesting case: what if the offset points past the end of the results? (e.g., the user is on page 3 but items were deleted.) The API should return an empty page, and the UI should show an empty state — not an error. Is the backend designed to return [] in that case rather than a 404?
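On the past-the-end question: Spring Data returns an empty Page for an out-of-range page rather than failing, so an empty-state response is the expected behavior. Sliced by hand, it looks like this (a plain-Java sketch, not the actual service code):

```java
import java.util.List;

class PageSlice {

    // Returns the window [offset, offset + limit) of items, or an empty
    // list when offset points past the end: an empty state, not an error.
    static <T> List<T> slice(List<T> items, int offset, int limit) {
        if (offset >= items.size()) {
            return List.of();
        }
        return items.subList(offset, Math.min(offset + limit, items.size()));
    }
}
```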

No design blockers — the fix is pure backend. Just want to make sure error states and pagination UX are defined before Kai implements the list components.
