Frontend: C3 — Variety review (score breakdown) #28

Closed
opened 2026-04-02 11:29:02 +02:00 by marcel · 6 comments
Owner

Summary

Detailed breakdown of the variety score with protein distribution analysis, sub-scores, and actionable warnings.

Journey: J2 — Plan the week
Role: Planner only
Screen: C3

Layout

Mobile (< 768px)

Stacked layout:

  1. Big score: Fraunces 56px weight 300 + "out of 10" + color description
  2. Progress bar: --yellow fill, 6px height, 120px wide
  3. Score breakdown: 3 rows
    • Protein diversity (e.g., 9/10)
    • Ingredient overlap (e.g., 7/10)
    • Effort balance (e.g., 8/10)
  4. Warnings: --yellow-tint cards with title + explanation

Desktop (> 1024px)

  • Sidebar (224px) + topbar (breadcrumb)
  • Top area (flex row):
    • Left (flex:1): score 72px Fraunces + 200px progress bar + sub-scores in bordered rows
    • Right (320px): 7-day protein grid + effort bar
  • Bottom area: full-width warning cards

Protein Grid (Desktop)

  • 7 columns (one per day), 6px gap, 44px height per cell
  • Colored by protein type
  • Repeated proteins: 2px yellow border highlight

Effort Bar (Desktop)

  • Proportional flex row with colored segments:
    • Easy = green, Medium = yellow, Hard = red
  • Labels: "Easy ×3", "Medium ×3", "Hard ×1"

Warnings

  • Full width, --yellow-tint bg, --radius-lg
  • Title + explanation text
  • Actionable: identifies which day/meal to consider swapping

Acceptance Criteria

  • Mobile: stacked score + breakdown + warnings
  • Desktop: 2-column with protein grid and effort bar
  • Sub-scores: protein diversity, ingredient overlap, effort balance
  • 7-day protein grid with repeat highlighting
  • Effort bar with proportional segments
  • Warning cards with actionable suggestions
marcel added the kind/feature and priority/medium labels 2026-04-02 11:30:15 +02:00

Spec file: specs/frontend/j2-plan-the-week.html — screen C3 with mobile (stacked score + breakdown) + desktop (2-col score/protein grid + effort bar + warnings) previews, agent table, and LLM implementation guide.


👨‍💻 Kai — Frontend Engineer

C3 is a data visualization screen — the most visually interesting layout in the planner section, and all of it is derived/read-only data. That makes it simpler on the write side but the rendering has a few tricky bits.

Component split for C3

  • VarietyScoreHero — the big Fraunces number (56px mobile / 72px desktop) + progress bar + color description
  • ScoreBreakdownTable — 3 rows: protein diversity, ingredient overlap, effort balance (sub-scores)
  • WarningCards — --yellow-tint cards, one per warning, full-width
  • ProteinGrid — desktop-only, 7-column grid, colored cells, repeat highlights
  • EffortBar — desktop-only, proportional flex row with colored segments + labels
  • C3Layout — orchestrates the 2-column desktop layout (flex row top + full-width bottom for warnings)

The protein grid — color mapping

  • 7 columns × up to 7 rows? Or 7 columns × 1 row per day with the protein type shown as a colored cell? The spec says "7 columns (one per day), 6px gap, 44px height per cell" — so it's 7 columns × however many distinct protein types appear (up to 7 if every day is unique).
  • Each cell is colored by protein type — what's the color mapping? Chicken = ?, Fish = ?, Vegetarian = ? This is a design system question for Atlas, but I need to know the mapping before I code the grid.
  • "Repeated proteins: 2px yellow border highlight" — this means: for any protein type that appears more than once across the 7 days, all cells of that type get a --yellow border. How do I define "repeated"? Same day-adjacent, or anywhere in the week?
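Whichever definition wins, the "anywhere in the week" reading is the simpler one to implement. A minimal sketch, assuming that reading and the `day`/`proteinType` field names from the draft API response below (both still unconfirmed):

```typescript
// Hypothetical cell shape; field names assumed from the draft API proposal.
type ProteinCell = { day: string; proteinType: string };

// Returns the set of protein types appearing on 2+ days, assuming
// "repeated" means anywhere in the week (the open question above).
function repeatedProteinTypes(grid: ProteinCell[]): Set<string> {
  const counts = new Map<string, number>();
  for (const cell of grid) {
    counts.set(cell.proteinType, (counts.get(cell.proteinType) ?? 0) + 1);
  }
  return new Set(
    [...counts].filter(([, n]) => n > 1).map(([type]) => type),
  );
}
```

Every cell whose type is in the returned set would get the 2px --yellow border. If "repeated" turns out to mean adjacent days only, this helper changes but the rendering side stays the same.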

The effort bar

  • "Proportional flex row with colored segments" — Easy=green, Medium=yellow, Hard=red. The widths are proportional to count: 3 easy out of 7 days = flex: 3, medium flex: 3, hard flex: 1.
  • Labels "Easy ×3" etc. — positioned inside the segment or below it? If inside, minimum segment width needs to accommodate the label text — need to handle the case where a segment is very narrow (e.g., Hard ×1 out of 7).
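A sketch of the segment math, with an assumed ~15% minimum share below which the label would move outside the segment (the threshold is my placeholder, not a spec value):

```typescript
type EffortCounts = { easy: number; medium: number; hard: number };

// Hypothetical helper: turns counts into flex-grow values and decides
// whether the in-segment label fits. minShare is an assumed threshold.
function effortSegments(counts: EffortCounts, minShare = 0.15) {
  const total = counts.easy + counts.medium + counts.hard;
  return (["easy", "medium", "hard"] as const)
    .filter((kind) => counts[kind] > 0) // a zero-count segment renders nothing
    .map((kind) => ({
      kind,
      flex: counts[kind], // CSS flex-grow handles the proportional widths
      labelInside: total > 0 && counts[kind] / total >= minShare,
    }));
}
```

For the 3/3/1 example, the Hard segment is 1/7 ≈ 14% of the bar, so under this threshold its label would fall back to below the bar — exactly the narrow-segment case flagged above.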

Progress bar — 120px wide on mobile

  • The spec says "120px wide, 6px height". A fixed pixel width on mobile is a smell — it means the bar won't span the full width even on narrow screens. Is this intentional (a fixed-width indicator, not a full-width bar) or should it be responsive?

Questions

  • Does C3 load with the current week's data, or is there a week selector to view historical scores?
  • What's the data source for sub-scores? Are they calculated server-side and returned as part of the variety score response, or computed client-side from raw plan data?
  • What's the empty state for C3 when no meals have been planned yet? (Score: 0? Hidden screen? A "plan some meals first" message?)
  • Are the warning cards actionable (e.g., clicking a warning navigates to the problematic day/slot), or purely informational?

🛠️ Backend Engineer — Variety Score API

C3 is a read-heavy, computation-heavy screen. The key backend question is whether the variety score and sub-scores are computed on-demand or cached. Let me work through the design.

Variety score API — what does the response look like?

  • GET /api/week-plans/{weekPlanId}/variety-score should return:
    {
      "total": 8.2,
      "description": "Great variety",
      "subScores": {
        "proteinDiversity": 9.0,
        "ingredientOverlap": 7.0,
        "effortBalance": 8.5
      },
      "proteinGrid": [
        { "day": "MON", "proteinType": "CHICKEN" },
        { "day": "TUE", "proteinType": "FISH" },
        ...
      ],
      "effortDistribution": { "easy": 3, "medium": 3, "hard": 1 },
      "warnings": [
        { "title": "Chicken twice this week", "explanation": "...", "affectedDays": ["MON", "THU"] }
      ]
    }
    
  • All display data in one response — C3 should not need multiple API calls to render.
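On the frontend side, the proposed payload could be typed like this — a sketch mirroring the draft JSON above; every field name here is still the proposal, not a settled contract:

```typescript
// Mirrors the proposed variety-score response; all names are draft.
type Day = "MON" | "TUE" | "WED" | "THU" | "FRI" | "SAT" | "SUN";

interface VarietyScoreResponse {
  total: number; // 0–10
  description: string; // e.g. "Great variety"
  subScores: {
    proteinDiversity: number;
    ingredientOverlap: number;
    effortBalance: number;
  };
  proteinGrid: { day: Day; proteinType: string }[];
  effortDistribution: { easy: number; medium: number; hard: number };
  warnings: { title: string; explanation: string; affectedDays: Day[] }[];
}
```

Having this single type drive all six C3 components reinforces the "one response, one render" constraint.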

Computation strategy

  • The variety score algorithm needs to be well-defined before implementation. The three sub-scores (protein diversity, ingredient overlap, effort balance) each need a formula. Are these documented anywhere beyond the issue?
  • On-demand calculation is fine for v1 — the algorithm runs in-memory over a max of 7 meal slots. No need for caching unless profiling reveals it's slow.
  • The score must be recalculated when: a meal is added/removed from the plan, a swap occurs, or a recipe's details change. Currently C1, C2, and J4 all trigger recalculation — the /variety-score endpoint should always return the current computed value, not a stale cached one.

Sub-score formulas — need agreement before implementation

  • Protein diversity: how is this measured? Distinct protein types / 7 days? Penalty for same protein on adjacent days?
  • Ingredient overlap: cross-meal ingredient deduplication rate? A lower overlap = higher score?
  • Effort balance: distance from a 3/3/1 or similar "ideal" distribution?
  • "Repeated proteins: 2px yellow border highlight" on the grid — is "repeated" defined as same protein type appearing on any 2+ days, or specifically adjacent days? This affects the warning logic.
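To make the discussion concrete, here are illustrative placeholder formulas for two of the sub-scores — these are NOT from the spec, just strawmen to agree or disagree with:

```typescript
// Illustrative only — the spec does not define these formulas yet.
// Both return a value on the 0–10 scale.

// Protein diversity strawman: distinct protein types over planned days.
function proteinDiversityScore(proteins: string[]): number {
  if (proteins.length === 0) return 0;
  return (new Set(proteins).size / proteins.length) * 10;
}

// Effort balance strawman: L1 distance from an assumed 3/3/1 "ideal"
// distribution, normalized so a perfect match scores 10.
function effortBalanceScore(easy: number, medium: number, hard: number): number {
  const total = easy + medium + hard;
  if (total === 0) return 0;
  const ideal = { easy: 3 / 7, medium: 3 / 7, hard: 1 / 7 };
  const dist =
    Math.abs(easy / total - ideal.easy) +
    Math.abs(medium / total - ideal.medium) +
    Math.abs(hard / total - ideal.hard);
  // Maximum L1 distance between two probability distributions is 2.
  return Math.max(0, 10 * (1 - dist / 2));
}
```

Even if the real formulas differ, agreeing on the shape (pure functions over the week's slots, 0–10 output) would unblock both the API and QA's assertions.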

Planner-only access

  • C3 is planner-only per the spec. GET /api/week-plans/{weekPlanId}/variety-score must return 403 for members.

Questions

  • Is the variety score algorithm documented in a spec file? I want to implement the exact formula, not invent one.
  • Does the proteinType field come from a recipe attribute or is it derived from ingredients? If derived, what's the mapping (e.g., recipes containing chicken breast → CHICKEN protein type)?
  • Are warnings generated server-side (preferred — single source of truth) or computed client-side from the proteinGrid data?
  • What's the expected behavior when fewer than 7 days are planned — partial week? Does the score denominator change, or are empty days treated as "no protein / no effort"?

🧪 QA Engineer — Test Coverage Plan for C3

C3 is a read-only display screen, which makes the frontend testing lighter — but the backend algorithm powering the scores needs rigorous coverage since it's a core business rule.

Backend unit tests — the score algorithm is the heart of this

  • shouldCalculateFullScoreForCompleteWeekWithDiverseProtein()
  • shouldReturnZeroScoreForEmptyWeekPlan()
  • shouldReturnPartialScoreWhenOnlySomeDaysArePlanned()
  • shouldDetectRepeatedProteinAndFlagInWarnings()
  • shouldDetectHighIngredientOverlapAndFlagInWarnings()
  • shouldCalculateEffortBalanceCorrectlyForAllEasyWeek()
  • shouldCalculateEffortBalanceCorrectlyForAllHardWeek()
  • shouldIncludeAffectedDaysInWarningPayload() — so the UI can highlight the right days

These are pure business logic tests — no DB needed, just a service method that takes a list of meal slots and returns a score response.

Backend integration tests

  • shouldReturn403WhenMemberRequestsVarietyScore() — planner-only
  • shouldReturn404WhenWeekPlanDoesNotExist()
  • shouldReturnConsistentScoreAfterSwapOperation() — swap a meal, then GET score, verify it reflects the new plan
  • shouldReturnCorrectProteinGridForWeekPlan() — verify the 7-element array matches the actual planned meals

Frontend component tests

  • VarietyScoreHero: renders correct score number, correct color description text, progress bar width proportional to score
  • ScoreBreakdownTable: renders all 3 sub-score rows with correct values
  • WarningCards: renders N cards when N warnings are present, renders nothing (or a "no warnings" state) when warnings array is empty
  • ProteinGrid: renders 7 cells, applies yellow border to repeated protein types
  • EffortBar: segments are proportional, labels show correct counts, handles edge case where one effort type is 0
  • Empty state: renders correctly when the week has no planned meals

Parameterized test cases for the score algorithm

  • The algorithm should be exercised with a @ParameterizedTest across representative week configurations:
    • All same protein (worst case)
    • All different proteins (best case)
    • Mixed: 5 different + 2 repeated on adjacent days
    • Partial week: 3/7 days planned
    • All hard effort vs. balanced
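The real suite would be a JUnit @ParameterizedTest on the backend; as a framework-agnostic sketch, the same table-driven idea looks like this (scoreWeek / hasRepeatWarning are hypothetical stand-ins for the service under test):

```typescript
// Framework-agnostic sketch of the parameterized cases above.
type WeekCase = { name: string; proteins: string[]; expectRepeatWarning: boolean };

const cases: WeekCase[] = [
  { name: "all same protein (worst case)",
    proteins: Array(7).fill("CHICKEN"), expectRepeatWarning: true },
  { name: "all different proteins (best case)",
    proteins: ["CHICKEN", "FISH", "BEEF", "PORK", "VEGETARIAN", "VEGAN", "LEGUMES"],
    expectRepeatWarning: false },
  { name: "partial week: 3/7 days planned",
    proteins: ["CHICKEN", "FISH", "CHICKEN"], expectRepeatWarning: true },
];

// Minimal stand-in so the table is runnable; real logic lives server-side.
function hasRepeatWarning(proteins: string[]): boolean {
  return new Set(proteins).size < proteins.length;
}

for (const c of cases) {
  if (hasRepeatWarning(c.proteins) !== c.expectRepeatWarning) {
    throw new Error(`case failed: ${c.name}`);
  }
}
```

The point of the table shape: adding a new week configuration is one row, not a new test method — which matters once the formulas are pinned down and boundary cases multiply.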

Questions

  • Is there a defined score range for each description? (e.g., 0–3 = "Poor variety", 4–6 = "Decent", 7–9 = "Good", 10 = "Excellent"?) We need to test the boundary values.
  • What's the exact formula for each sub-score? Without it, I can't write meaningful assertions — I'd just be testing that "some number comes back", which is pointless.
  • Are warnings deterministic — given the same input, do they always produce the same output? If yes, they're easy to unit test. If there's any randomness (e.g., which day to suggest swapping when multiple options exist), we need to seed that.

🔐 Sable — Security Engineer

C3 is read-only and planner-only, which makes the attack surface smaller than D1 or J4. But there are a few things worth flagging given the score algorithm and data exposure.

Planner-only access — the obvious check

  • GET /api/week-plans/{weekPlanId}/variety-score must enforce planner role at the service layer. A member should not be able to access C3 data even by direct API call.
  • The weekPlanId must belong to the caller's household. A planner from Household A should not be able to query the variety score for Household B's week plan by guessing or enumerating UUIDs.

Data exposure in the variety score response

  • The protein grid response includes per-day protein types — this indirectly reveals the meal plan structure (what's planned each day) to whoever can read this endpoint. Since C3 is planner-only, this is appropriately gated.
  • Warnings with affectedDays and meal details — make sure warning text doesn't accidentally expose recipe names or ingredient details that the score algorithm shouldn't be surfacing (e.g., if warnings are generated from raw DB queries).

The score algorithm as a potential information oracle

  • Somewhat theoretical for v1, but worth noting: if the score algorithm is ever made accessible to members (even partially), it could reveal planner-only plan details indirectly. Keep this endpoint firmly in the planner-only gate.

Error response information leakage

  • If the weekPlanId doesn't exist, return 404. Do not return different error messages for "plan not found" vs "plan exists but belongs to another household" — the latter would confirm existence and enable enumeration.
  • Standardize: any unauthorized access to a resource in another household should return 404 (not 403), to avoid confirming the resource exists.
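A sketch of that lookup policy — missing and foreign plans are indistinguishable to the caller. Names here (HttpError, loadWeekPlanForCaller) are hypothetical, not the real service layer:

```typescript
// Hypothetical error type carrying an HTTP status.
class HttpError extends Error {
  constructor(public status: number, message: string) {
    super(message);
  }
}

interface WeekPlan { id: string; householdId: string }

// Anything the caller cannot see is a plain 404 — "not found" and
// "belongs to another household" must produce the identical response.
function loadWeekPlanForCaller(
  plan: WeekPlan | undefined,
  callerHouseholdId: string,
): WeekPlan {
  if (!plan || plan.householdId !== callerHouseholdId) {
    throw new HttpError(404, "Week plan not found");
  }
  return plan;
}
```

Role checks (planner vs. member within the same household) can still return 403, since a member already knows the plan exists — it's only cross-household existence that must not leak.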

No user-generated content on C3

  • C3 is purely derived data — no user input is rendered. No {@html} risk, no injection surface. This is good from a frontend security perspective.

Questions

  • The variety score is "always visible on C1" per Kai's persona context — does C1 show the total score for members too, or only for planners? If members see the score on C1, is there a stripped-down score endpoint that's accessible to members?
  • Is the score algorithm deterministic and versioned? If the formula changes, historical scores should either be recalculated or flagged as "calculated with an older version". Not a v1 concern, but good to design for.
  • Are any warning messages user-influenced (e.g., derived from recipe names)? If so, ensure they're escaped properly before rendering.

🎨 Atlas — UI/UX Designer

C3 is a data-dense screen and the design needs to communicate nutritional intelligence without overwhelming. The existing spec has a solid skeleton — here's what needs to be locked down before implementation.

The big score number — color description

  • "Color description" next to the score (e.g., "Great variety", "Decent") needs a defined mapping. My recommendation for a 0–10 scale:
    • 0–3: --color-error text, "Needs improvement"
    • 4–6: --yellow-text text, "Getting there"
    • 7–8: --color-text, "Good variety"
    • 9–10: --green-dark text, "Excellent variety"
  • The Fraunces weight 300 at 56px (mobile) / 72px (desktop) is correct per our display heading rules. Do not go above weight 600.
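The proposed mapping, as a single lookup the hero component could own — bands, copy, and token names are my recommendation above, not a settled spec, and where fractional scores like 8.2 land at a band edge is exactly QA's boundary question:

```typescript
// Sketch of the proposed score → description mapping (recommendation only).
// Band edges for fractional scores (e.g. 8.2) are still an open question.
function scoreDescription(score: number): { text: string; colorToken: string } {
  if (score <= 3) return { text: "Needs improvement", colorToken: "--color-error" };
  if (score <= 6) return { text: "Getting there", colorToken: "--yellow-text" };
  if (score <= 8) return { text: "Good variety", colorToken: "--color-text" };
  return { text: "Excellent variety", colorToken: "--green-dark" };
}
```

Keeping the mapping in one function means the C1 score chip (if members ever see it) and the C3 hero can't drift apart.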

Progress bar — 120px fixed width is intentional but needs clarification

  • A 120px fixed-width bar on mobile is a deliberate "gauge" pattern, not a full-width progress indicator. It reads as a compact summary metric, not a loading bar. This is fine IF it's visually clear it represents a scale (0–10), not progress.
  • The bar should have a --color-border track background and --yellow fill (matching the issue spec). Ensure the track has --radius-full and the fill bar does too.

Protein grid — color tokens needed

  • The grid cells are "colored by protein type" — but we have no protein type color tokens in the design system. I need to define these before implementation. My proposal:
    • Chicken/Poultry: warm amber (--protein-poultry)
    • Fish/Seafood: blue-teal (--protein-fish)
    • Beef/Pork: deep red (--protein-red-meat)
    • Vegetarian: --green-tint / --green-dark text
    • Vegan: lighter green variant
    • Legumes: earthy brown
  • These need to be added to the design system as semantic tokens, not hardcoded colors.

Effort bar — label placement for narrow segments

  • When a segment is very narrow (e.g., "Hard ×1" in a 7-day week = ~14% of bar width), the label text won't fit inside. Labels should be positioned below the bar in all cases for consistency, not inside the segments. This also improves readability against colored backgrounds.

Warning cards — --radius-lg

  • The spec says --radius-lg for warning cards — that's 10px. This matches our "elevated/notable surface" pattern. Good. Ensure consistent 16px horizontal padding and 12px vertical padding inside the card.
  • Warning cards should have a --yellow-text title (13px/500) and --color-muted explanation text (13px/400). The title should summarize the problem; the explanation should give context.

Questions

  • On mobile, are the sub-score rows in the breakdown table displayed as "Protein diversity · 9/10" in a single line, or two-line label + value?
  • Should the warning cards link to C1 at the specific day, or are they purely informational on C3? If they link, that's a meaningful UX improvement (1 tap to fix the warned issue).
  • Is there a maximum number of warnings to display? A week could theoretically trigger 5+ warnings — should they be truncated or all shown?