
feat(epic): custom KPI and KRI#2932

Merged
ab-smith merged 92 commits into main from custom_metrics
Dec 20, 2025

Conversation


@ab-smith ab-smith commented Nov 23, 2025

  • metric definition, metric instance, and metric sample models
  • support metric definition through a library
  • display metric instance tracking within its page
  • dashboards model and DnD authoring
  • support markdown widget
  • add builtin metrics model
  • backfill command, included in startup.py
  • add import metric definition button
  • add builtin metrics widgets
  • manage min size for widgets
  • add feature flag
  • detect stale metrics based on expected frequency
  • add assigned metrics to the assignment page
  • check name uniqueness for metric instances and dashboards

Summary by CodeRabbit

  • New Features

    • Full metrology module: metric definitions & instances, custom and builtin metric samples, dashboards and widgets, REST APIs and UI pages; dashboard editor with drag-and-drop, multiple chart types, KPI cards and tables; metric import, daily snapshots, backfill and cleanup utilities.
  • Documentation

    • Added Core Cybersecurity Metrics library (25 predefined metrics) and new localization strings (en/fr).
  • Chores

    • Metrology app enabled, routes/permissions/feature-flag added; default builtin-metrics retention set to 2 years.



coderabbitai Bot commented Nov 23, 2025

Walkthrough

Adds a new metrology app: DB models, REST endpoints, import/conversion support, management commands, periodic tasks, startup provisioning, frontend UI (forms/pages/components), translations, and permissions; integrates metric snapshots into existing assessment save paths and library import flows.

Changes

  • App registration & settings (backend/ciso_assistant/settings.py, enterprise/backend/enterprise_core/settings.py, backend/metrology/apps.py): registered the new Django app metrology in INSTALLED_APPS and added its AppConfig.
  • DB schema / models / migration (backend/metrology/models.py, backend/metrology/migrations/0001_initial.py): added the MetricDefinition, MetricInstance, BuiltinMetricSample, CustomMetricSample, Dashboard and DashboardWidget models with their fields, relations, constraints and helpers.
  • API layer (backend/metrology/views.py, backend/metrology/serializers.py, backend/metrology/urls.py): new DRF ViewSets, serializers and router registrations for metrology resources; custom actions for choices, supported models, object refresh and queries.
  • Library import & conversion (backend/library/utils.py, tools/convert_library_v2.py, backend/library/libraries/core_cybersecurity_metrics.yaml): added MetricDefinitionImporter, library importer support for metric_definitions, conversion parsing/validation, and a core metrics YAML manifest.
  • Core integrations & terminology (backend/core/models.py, backend/core/migrations/0120_alter_terminology_field_path.py): Terminology extended with METRIC_UNIT and DEFAULT_METRIC_UNITS; added create_default_metric_units and LibraryUpdater.update_metric_definitions; assessment/finding save paths now call upsert_daily_metrics to emit BuiltinMetricSample snapshots.
  • Startup, settings & permissions (backend/core/startup.py, backend/global_settings/serializers.py, backend/global_settings/views.py): added metrology permissions to role lists, a metrology feature flag, and a builtin_metrics_retention_days setting (default 730); startup now provisions default metric units and invokes the backfill.
  • Builtin metrics registry & tasks (backend/metrology/builtin_metrics.py, backend/metrology/tasks.py): added the builtin metrics registry with helper lookups, plus scheduled Huey tasks for metric staleness checks and retention-based cleanup of builtin samples.
  • Management commands (backend/metrology/management/commands/backfill_builtin_metrics.py, backend/metrology/management/commands/cleanup_builtin_metrics.py): commands to migrate historical metrics, generate snapshots, apply retention, delete orphans, and purge/regenerate snapshots.
  • Library import wiring (backend/library/utils.py): included metric_definitions in OBJECT_FIELDS and the import flow; added the MetricDefinitionImporter class with validation/import logic.
  • Frontend components & forms (frontend/src/lib/components/Chart/*, frontend/src/lib/components/Forms/ModelForm/*, frontend/src/lib/components/OrderedEntryList.svelte): added DashboardWidgetChart, MetricSampleChart, OrderedEntryList and form components for metric definitions, instances, samples, dashboards and widgets (text/builtin/custom).
  • Frontend pages, routes & load/actions (frontend/src/routes/(app)/(internal)/*dashboards*, .../metric-instances/*, .../libraries/*, frontend/src/routes/(app)/(internal)/[model=urlmodel]/[id=uuid]/edit/+layout.server.ts): added dashboard detail/layout pages and the editor, a metric-instance detail page, library view integration, and load/actions for widget management and snapshot fetching; ModelForm now receives a supportedModels prop.
  • Frontend wiring, utils & table schemas (frontend/src/lib/utils/{crud,schemas,table,types,load,sidebar-config}.ts): added URL_MODEL entries, schemas, list/table configurations, initial-data nested extraction, and metrology feature-flag propagation for sidebar visibility.
  • Frontend i18n & nav (frontend/messages/en.json, frontend/messages/fr.json, frontend/src/lib/components/SideBar/navData.ts, enterprise/frontend/.../navData.ts): added extensive metrology translations and navigation entries (metric-definitions, metric-instances, dashboards).
  • Tests & fixtures (backend/test_fixtures.py, backend/app_tests/api/*, backend/core/tests/test_models.py): added a RISK_MATRIX_JSON_DEFINITION fixture, updated tests to use it, and adjusted an evidence filename assertion.
  • Other frontend integrations (multiple frontend route and component files — ModelForm integration, my-assignments panel, experimental ordered-list page, etc.): integrated forms, editor UIs, counts, and experimental pages to surface metrology features across the frontend.

Sequence Diagram(s)

sequenceDiagram
    participant YAML as Library YAML
    participant Importer as LibraryImporter
    participant MetricImp as MetricDefinitionImporter
    participant Terminology as Terminology
    participant DB as Database

    YAML->>Importer: provide metric_definitions
    Importer->>MetricImp: init_metric_definitions(list)
    loop each metric
      MetricImp->>Terminology: resolve unit by name (METRIC_UNIT)
      Terminology-->>MetricImp: unit object or None
      MetricImp->>DB: create/update MetricDefinition record
      DB-->>MetricImp: saved
    end
    MetricImp-->>Importer: import complete
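The import loop in the diagram above can be sketched as a small function with the collaborators injected. The function and parameter names are assumptions taken from the diagram, not the PR's code; note that a unit legitimately resolving to None is kept, matching the "unit object or None" arrow.

```python
def import_metric_definitions(definitions, resolve_unit, upsert):
    """Mirror the sequence above: for each metric definition, resolve its
    unit by name (Terminology lookup), then create/update the record.

    resolve_unit(name) -> unit object or None
    upsert(definition, unit) -> the saved record
    """
    imported = []
    for d in definitions:
        unit = resolve_unit(d.get("unit"))  # may legitimately be None
        imported.append(upsert(d, unit))
    return imported
```

Injecting `resolve_unit` and `upsert` keeps the loop testable without a database.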
sequenceDiagram
    participant Assessment as Assessment instance (save)
    participant Upserter as upsert_daily_metrics()
    participant BMS as BuiltinMetricSample
    participant Compute as compute_metrics()
    participant DB as Database

    Assessment->>Upserter: save triggers upsert_daily_metrics()
    Upserter->>BMS: update_or_create_snapshot(assessment)
    BMS->>Compute: compute_metrics(assessment)
    Compute->>Compute: run type-specific _compute_* methods
    Compute-->>BMS: metrics dict
    BMS->>DB: get_or_create by (content_type, object_id, date)
    DB-->>BMS: sample created/updated
    BMS-->>Assessment: snapshot persisted

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Areas requiring extra attention:

  • BuiltinMetricSample.compute_metrics and per-model metric computations and edge cases.
  • Library import (MetricDefinitionImporter) URN normalization, unit resolution and choices parsing.
  • Save-path changes in core models that trigger snapshot generation (side-effects and ordering).
  • Dashboard layout editor (drag/resize), widget persistence and concurrent updates.
  • Management commands (backfill/cleanup) transformation logic and idempotency.
  • Serializer/ViewSet validation for DashboardWidget mutual-exclusivity and content-type/object resolution.
  • Startup provisioning/backfill invocation error handling.

Poem

🐰 I hopped through YAML, forms, and queues,
I planted units, choices, and views,
Dashboards sprout widgets, samples take flight,
Snapshots at dawn, charts glowing bright,
A rabbit's small math — metrics done right.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning: docstring coverage is 51.22%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title check — ✅ Passed: the title 'feat(epic): custom KPI and KRI' is specific and clearly indicates the main feature being added (custom KPIs and KRIs), which aligns with the comprehensive metrics system implementation shown in the changeset.

📜 Recent review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e82a631 and 10d616c.

📒 Files selected for processing (1)
  • frontend/tests/functional/domain-import.test.ts (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
  • GitHub Check: enterprise-startup-docker-compose-test
  • GitHub Check: startup-functional-test (3.12)
  • GitHub Check: startup-docker-compose-test
  • GitHub Check: enterprise-startup-functional-test (3.12)
  • GitHub Check: build (3.12)
  • GitHub Check: test (3.12)
  • GitHub Check: build_enterprise_frontend
  • GitHub Check: build_community_frontend
🔇 Additional comments (1)
frontend/tests/functional/domain-import.test.ts (1)

64-64: 30-second timeout is reasonable given the project's 20-second default, but add explanatory comment.

The 30-second timeout is only a 1.5x increase from the project's configured global expect timeout of 20 seconds, making it a modest and justified extension for the slower import and demo data loading operations. This targeted timeout adjustment aligns with Playwright best practices for operation-specific assertions rather than applying a single global setting.

To ensure clarity for future maintainers, add a comment explaining why these operations require the extended timeout (e.g., import operations involve file validation, data processing, and provisioning that can take 10-20 seconds).



@gitguardian

gitguardian Bot commented Nov 23, 2025

️✅ There are no secrets present in this pull request anymore.

If these secrets were true positives and are still valid, we strongly recommend revoking them. While these secrets were previously flagged, we no longer have a reference to the specific commits where they were detected. Once a secret has been leaked into a git repository, you should consider it compromised, even if it was deleted immediately.



@ab-smith added the High Value and db-migration (this branch contains migration files; caution should be exercised when merging to avoid conflicts) labels on Nov 28, 2025

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
frontend/src/lib/components/Chart/MetricSampleChart.svelte (1)

21-21: Hardcoded chart_id still causes collisions with multiple chart instances.

This issue was flagged in a previous review but remains unresolved. When multiple MetricSampleChart components render on the same page, they target the same DOM element ID, causing rendering conflicts.

🔎 Proposed fix using crypto.randomUUID()
-	const chart_id = 'metric-sample-chart';
+	const chart_id = `metric-sample-chart-${crypto.randomUUID()}`;
🧹 Nitpick comments (1)
frontend/src/lib/components/Chart/MetricSampleChart.svelte (1)

5-11: Consider stronger typing for Props interface.

The any types for samples and metricDefinition reduce type safety. More specific interfaces would improve maintainability and catch errors at compile time.

💡 Example TypeScript interfaces
interface MetricSample {
  timestamp: string;
  value: string | { choice_index?: number; result?: number };
}

interface ChoiceDefinition {
  name: string;
}

interface Unit {
  name: string;
}

interface MetricDefinition {
  category: 'qualitative' | 'quantitative';
  unit?: Unit;
  choices_definition?: ChoiceDefinition[];
}

interface Props {
  samples?: MetricSample[];
  metricDefinition?: MetricDefinition;
  width?: string;
  height?: string;
  classesContainer?: string;
}
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a2b0fb0 and 9ef84da.

📒 Files selected for processing (1)
  • frontend/src/lib/components/Chart/MetricSampleChart.svelte (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
  • GitHub Check: enterprise-startup-functional-test (3.12)
  • GitHub Check: enterprise-startup-docker-compose-test
  • GitHub Check: startup-functional-test (3.12)
  • GitHub Check: startup-docker-compose-test
  • GitHub Check: test (3.12)
  • GitHub Check: build_enterprise_frontend
  • GitHub Check: build_community_frontend
  • GitHub Check: build (3.12)
🔇 Additional comments (3)
frontend/src/lib/components/Chart/MetricSampleChart.svelte (3)

22-48: LGTM: Data processing logic is sound.

The component correctly handles both qualitative and quantitative metrics, safely parses JSON values, filters invalid data, and sorts chronologically. Error handling for malformed data is appropriate.


140-146: ✅ Memory leak fix verified.

The resize listener now correctly uses the same function reference (handleResize) for both addEventListener and removeEventListener. Chart disposal is also properly handled. This addresses the issue flagged in the previous review.


69-70: No issues found - indexing is correctly 1-based throughout.

Backend code explicitly states "Convert 1-based index to 0-based for array access" (backend/metrology/models.py:276), confirming choice_index values are intentionally 1-based. Frontend code correctly assumes this: lines 69-70 and 98 use value - 1 to access the choice names array, and the y-axis is configured with min: 1 (lines 93-94). The indexing is consistent and correct across the entire codebase. No changes needed.

Likely an incorrect or invalid review comment.
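The 1-based convention discussed above comes down to a single off-by-one conversion at the array boundary. This is a hedged sketch of that conversion (not the PR's backend/metrology/models.py code): a stored 1-based choice_index resolved against a 0-based list of choice names, rejecting out-of-range values.

```python
def choice_name(choices: list[str], choice_index: int) -> str:
    """Resolve a 1-based choice_index (as stored in samples) against a
    0-based Python list of choice names."""
    if not 1 <= choice_index <= len(choices):
        raise IndexError(
            f"choice_index {choice_index} out of range 1..{len(choices)}"
        )
    return choices[choice_index - 1]  # convert 1-based to 0-based
```

Centralizing the `- 1` in one helper is what keeps backend and frontend consistent: every consumer does the conversion exactly once, at the same boundary.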


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (6)
frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.svelte (2)

94-96: Consider validating chart_type before use.

While the fallback handles undefined chartType, consider adding validation where getMinDimensions is called to ensure widget.chart_type is defined. This would make debugging easier if widgets arrive without a chart type.


124-168: Optional: Extract common modal logic.

The three modal functions share similar structure. Consider extracting a helper function to reduce duplication:

🔎 Suggested refactor
function openAddWidgetModal(type: 'custom' | 'builtin' | 'text') {
  const configs = {
    custom: {
      form: widgetCreateForm,
      model: widgetModel,
      formAction: '/dashboard-widgets?/create',
      title: 'addCustomWidget'
    },
    builtin: {
      form: builtinWidgetCreateForm,
      model: builtinWidgetModel,
      formAction: '/dashboard-builtin-widgets?/create',
      title: 'addBuiltinWidget',
      supportedModels
    },
    text: {
      form: textWidgetCreateForm,
      model: textWidgetModel,
      formAction: '/dashboard-widgets?/create',
      title: 'addTextWidget'
    }
  };
  
  const config = configs[type];
  modalStore.trigger({
    type: 'component',
    component: {
      ref: CreateModal,
      props: {
        form: config.form,
        model: config.model,
        formAction: config.formAction,
        ...(config.supportedModels && { supportedModels: config.supportedModels })
      }
    },
    title: safeTranslate(config.title)
  });
}
frontend/messages/en.json (2)

2708-2773: Minor wording fixes for metric instances and layout tab

Two small UX-facing text issues worth tightening up:

  • Line 2713: "metricInstances": "Metrics Instances" is grammatically awkward; “Metric instances” reads more naturally.
  • Line 2759: "editLayout": "Widgets" is slightly confusing given the key name; if this is the tab where layout is edited, “Layout” would be clearer and also align better with the French translation.
Proposed wording tweaks
- "metricInstance": "Metric Instance",
- "metricInstances": "Metrics Instances",
+ "metricInstance": "Metric instance",
+ "metricInstances": "Metric instances",

- "editLayout": "Widgets",
+ "editLayout": "Layout",

2911-2925: Ordered-entry list labels are fine; optional casing polish

The ordered-entry list keys are clear and self‑consistent. If you want to fully align with the rest of the UI’s sentence‑case buttons, you could optionally change “Add New Entry” to “Add new entry”, but it’s not required.

Optional stylistic tweak
- "orderedEntryListAddNewEntry": "Add New Entry",
+ "orderedEntryListAddNewEntry": "Add new entry",
frontend/messages/fr.json (2)

2283-2283: metricsDefinitions label could be made more explicit (optional)

"metricsDefinitions": "Définitions" is understandable but a bit generic; if this appears out of context, “Définitions de métriques” would be clearer and closer to the English “Metrics Definitions”. Not mandatory, but worth considering.

Optional clarification
- "metricsDefinitions": "Définitions",
+ "metricsDefinitions": "Définitions de métriques",

2931-3018: Align layout tab label and slightly clarify metric definitions (FR block)

Overall this block is well translated and matches the new metrics/dashboard features. A couple of small improvements:

  1. Tab label alignment with EN:

    • EN suggestion was to use "editLayout": "Layout" for the layout-editing tab.
    • FR currently has "editLayout": "Modifier la disposition", which reads more like an action than a short tab label, and doesn’t mirror the noun form.
    • Consider shortening to just “Disposition” so both languages use a concise section/tab title.
  2. Metric definition labels (optional):

    • "metricDefinitions": "Définitions" and "metricsDefinitions": "Définitions" are both quite generic.
    • If these labels surface in places where context isn’t obvious, “Définitions de métriques” (for one or both) would better match the English and disambiguate.
Proposed French tweaks
- "editLayout": "Modifier la disposition",
+ "editLayout": "Disposition",

- "metricDefinitions": "Définitions",
+ "metricDefinitions": "Définitions de métriques",

(Optionally make the same change for "metricsDefinitions" if you want both to read explicitly “Définitions de métriques”.)

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9ef84da and 98f4495.

📒 Files selected for processing (3)
  • frontend/messages/en.json (6 hunks)
  • frontend/messages/fr.json (4 hunks)
  • frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.svelte (1 hunks)
🧰 Additional context used
🪛 GitHub Actions: Frontend Linters
frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.svelte

[warning] 1-1: Code style issues found in the above file. Run 'pnpm exec prettier --write' to fix.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
  • GitHub Check: build_enterprise_frontend
  • GitHub Check: build_community_frontend
  • GitHub Check: build (3.12)
  • GitHub Check: test (3.12)
  • GitHub Check: startup-docker-compose-test
  • GitHub Check: enterprise-startup-functional-test (3.12)
  • GitHub Check: startup-functional-test (3.12)
  • GitHub Check: enterprise-startup-docker-compose-test
🔇 Additional comments (6)
frontend/messages/en.json (4)

68-69: Metrology section label and description look consistent

Label and description are clear and fit the new metrics/KPI feature area; no issues spotted.


266-266: Import metric definitions action is well named

“Import metric definitions” matches existing import* patterns and is self‑explanatory.


1873-1874: displayValue / currentValue naming is coherent

These labels are consistent with the existing value key and with how metric values are typically shown; nothing to adjust here.


3014-3018: Cascade hint and metric catalog descriptions are clear

cascadeAffectedHint, count, and the three *Description keys read well and explain the behavior/sections clearly; no changes needed.

frontend/messages/fr.json (2)

67-68: Metrology section translations are appropriate

“Métriques” and its description accurately capture the new metrics/KPI area; terminology fits the rest of the app.


258-258: Import metric definitions translation is accurate

“Importer des définitions de métriques” matches the English intent and existing import labels.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
frontend/src/lib/utils/table.ts (1)

2335-2413: Endpoint prefix inconsistency remains unresolved (duplicate issue).

This is a duplicate of a previous review comment. Several metrology filter endpoints are still missing the required metrology/ prefix:

  • Line 2352: 'metric-definitions/category' → should be 'metrology/metric-definitions/category'
  • Line 2372: 'metric-definitions/provider' → should be 'metrology/metric-definitions/provider'
  • Line 2386: 'metric-definitions' → should be 'metrology/metric-definitions'
  • Line 2394: 'metric-instances/status' → should be 'metrology/metric-instances/status'

Note that dashboard-related endpoints (lines 2446, 2454, 2462, 2480, 2495) already correctly use the metrology/ prefix, showing this can be implemented consistently.

🧹 Nitpick comments (1)
frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.server.ts (1)

86-98: Consider parallelizing select option fetches for faster page loads.

The current implementation fetches select options sequentially, which works but is slower than necessary. For dashboards with multiple select fields, this adds cumulative latency.

🔎 Refactor to use Promise.all for parallel fetching
-	for (const selectField of selectFields) {
-		if (selectField.detail) continue;
-		const url = `${BASE_API_URL}/${widgetModel.endpointUrl}/${selectField.field}/`;
-		const response = await event.fetch(url);
-		if (response.ok) {
-			selectOptions[selectField.field] = await response.json().then((data: Record<string, any>) =>
-				Object.entries(data).map(([key, value]) => ({
-					label: value,
-					value: selectField.valueType === 'number' ? parseInt(key) : key
-				}))
-			);
-		}
-	}
+	await Promise.all(
+		selectFields
+			.filter((selectField) => !selectField.detail)
+			.map(async (selectField) => {
+				const url = `${BASE_API_URL}/${widgetModel.endpointUrl}/${selectField.field}/`;
+				const response = await event.fetch(url);
+				if (response.ok) {
+					selectOptions[selectField.field] = await response.json().then((data: Record<string, any>) =>
+						Object.entries(data).map(([key, value]) => ({
+							label: value,
+							value: selectField.valueType === 'number' ? parseInt(key) : key
+						}))
+					);
+				}
+			})
+	);
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 98f4495 and 213fa59.

📒 Files selected for processing (4)
  • frontend/messages/fr.json (4 hunks)
  • frontend/src/lib/utils/table.ts (1 hunks)
  • frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.server.ts (1 hunks)
  • frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.svelte (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-11-17T11:09:18.488Z
Learnt from: monsieurswag
Repo: intuitem/ciso-assistant-community PR: 2860
File: frontend/src/routes/(app)/(internal)/[model=urlmodel]/[id=uuid]/+page.server.ts:0-0
Timestamp: 2025-11-17T11:09:18.488Z
Learning: In frontend/src/routes/(app)/(internal)/[model=urlmodel]/[id=uuid]/+page.server.ts, the select action performs batch PATCH operations in Promise.all without transaction semantics. Partial failures (e.g., one object fails to update due to race conditions) are acceptable and should not trigger a fail() return, as most objects will still be updated successfully. Console warnings are sufficient for logging failures.

Applied to files:

  • frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.server.ts
  • frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.svelte
📚 Learning: 2025-09-18T10:42:17.597Z
Learnt from: ab-smith
Repo: intuitem/ciso-assistant-community PR: 2548
File: frontend/src/lib/components/ModelTable/LibraryOverview.svelte:1-22
Timestamp: 2025-09-18T10:42:17.597Z
Learning: The frontend of the ciso-assistant-community project is based on Svelte 5, so Svelte 5 runes like $props(), $derived, $state, etc. are expected and should not be flagged as issues.

Applied to files:

  • frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.svelte
🧬 Code graph analysis (1)
frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.server.ts (5)
frontend/src/lib/utils/load.ts (1)
  • loadDetail (68-236)
frontend/src/lib/utils/crud.ts (2)
  • getModelInfo (2451-2457)
  • urlParamModelSelectFields (2478-2480)
frontend/src/lib/utils/constants.ts (1)
  • BASE_API_URL (4-8)
frontend/src/lib/utils/schemas.ts (1)
  • modelSchema (1481-1483)
frontend/src/lib/utils/actions.ts (2)
  • nestedDeleteFormAction (251-256)
  • nestedWriteFormAction (176-196)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: build (3.12)
  • GitHub Check: startup-functional-test (3.12)
  • GitHub Check: enterprise-startup-docker-compose-test
  • GitHub Check: enterprise-startup-functional-test (3.12)
  • GitHub Check: startup-docker-compose-test
  • GitHub Check: build_community_frontend
  • GitHub Check: build_enterprise_frontend
  • GitHub Check: test (3.12)
  • GitHub Check: Analyze (python)
🔇 Additional comments (9)
frontend/messages/fr.json (1)

67-68: Well-implemented French translations for metrology features.

The French translations for the new metrology subsystem are professionally done and consistent with existing patterns:

  • Core navigation and descriptions properly translated
  • Technical terminology appropriately handled (e.g., "widget", "dashboard" used as common French tech terms)
  • Grammar and sentence structure are correct throughout
  • The shortening of "metricsDefinitions" from "Définitions des métriques" to "Définitions" (line 2283) is reasonable for UI space constraints in navigation

The comprehensive set of ~90 new translation keys properly covers all metrology features including metrics, dashboards, widgets, samples, and related UI elements.

Also applies to: 258-258, 2283-2283, 2931-3018

frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.svelte (5)

1-117: LGTM: Setup and initialization are well-structured.

The component initialization follows Svelte 5 best practices with appropriate use of runes. Grid constants are clearly defined, and the modal subscription pattern correctly cleans up on unmount.


188-257: LGTM: Drag-and-drop logic correctly handles positioning and bounds.

The drag offset calculation prevents visual jumps, and position updates are properly constrained to grid boundaries with appropriate bounds checking.


259-273: LGTM: Resize logic correctly enforces dimension constraints.

The function properly enforces both minimum dimensions (per chart type) and maximum dimensions (grid boundaries), preventing widgets from exceeding grid bounds.


276-321: LGTM: Grid calculations and utilities are correctly implemented.

The derived values and helper functions properly manage grid state and generate appropriate CSS positioning. The row management logic allows for flexible canvas sizing.


324-586: Template rendering is well-structured with good UX patterns.

The layout editor UI provides clear visual feedback with grid backgrounds, hover states, and intuitive controls. Empty state handling guides users to add their first widget.

Note: Accessibility improvements for drag-and-drop and resize operations, memory leak prevention, and formatting fixes are already covered in previous review comments.

frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.server.ts (3)

11-45: LGTM: Dashboard and widget form initialization is correct.

The shallow copy pattern properly prevents mutation of shared model cache, and the firstFreeRow calculation correctly identifies the next available position for new widgets.


46-81: LGTM: Widget form preparation follows consistent patterns.

Both text and builtin widget forms use the same shallow copy approach and provide appropriate default values for their respective widget types.


133-175: LGTM: Layout save action correctly handles batch widget updates.

The sequential PATCH approach with error collection aligns with the codebase pattern of accepting partial failures in batch operations. Error handling appropriately captures individual failures while allowing other updates to proceed.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
frontend/messages/fr.json (1)

2931-3017: Recommend native French speaker review for new metrology translations.

This PR adds approximately 86 new translation keys for the metrology feature (metrics, dashboards, widgets, KPIs). The technical aspects are solid:

  • ✓ Valid JSON syntax
  • ✓ Consistent key naming conventions
  • ✓ All keys have French translations (no untranslated English)
  • ✓ No duplicate keys

However, given the volume and complexity of the new translations, I recommend having a native French speaker review these for:

  • Translation accuracy and naturalness
  • Consistency with existing terminology throughout the application
  • Technical term choices (e.g., "widget", "dashboard" usage in French contexts)
  • Help text clarity and usability

This is especially important for user-facing features like dashboards and metrics where clear communication is critical.

backend/metrology/models.py (2)

289-293: Consider moving unit display mappings to configuration.

Lines 289-292 contain hardcoded unit name transformations (percentage → %, request_per_second → RPS). This creates a maintenance burden if new units with special display formats are added.

Consider storing display format in the Terminology model (e.g., add a display_format field) or maintaining a centralized mapping dictionary at the module level.

🔎 Alternative approaches

Option 1: Add display_format to Terminology model

# In Terminology model
display_format = models.CharField(max_length=50, blank=True, help_text="Display format for UI (e.g., '%' for percentage)")

Option 2: Module-level mapping

# At module top
UNIT_DISPLAY_MAP = {
    "percentage": "%",
    "request_per_second": "RPS",
    # Add more as needed
}

# In display_value()
unit = UNIT_DISPLAY_MAP.get(metric_definition.unit.name, metric_definition.unit.name)

403-414: String-based model dispatch is fragile; consider isinstance() checks.

Line 403 uses obj.__class__.__name__ for string comparison to dispatch to different compute methods. This approach is fragile because:

  • Model renames break silently (no errors, just returns empty dict)
  • Typos in string comparisons are not caught by static analysis
  • Less discoverable for developers

If circular imports allow, use isinstance() checks instead:

🔎 Proposed refactor
 @classmethod
 def compute_metrics(cls, obj):
     """
     Compute metrics for the given object based on its type.

     Args:
         obj: The model instance

     Returns:
         Dictionary of computed metrics
     """
-    model_name = obj.__class__.__name__
-
-    if model_name == "ComplianceAssessment":
+    # Import at function level to avoid circular imports if needed
+    from core.models import ComplianceAssessment, RiskAssessment, FindingsAssessment
+    from iam.models import Folder
+    
+    if isinstance(obj, ComplianceAssessment):
         return cls._compute_compliance_assessment_metrics(obj)
-    elif model_name == "RiskAssessment":
+    elif isinstance(obj, RiskAssessment):
         return cls._compute_risk_assessment_metrics(obj)
-    elif model_name == "FindingsAssessment":
+    elif isinstance(obj, FindingsAssessment):
         return cls._compute_findings_assessment_metrics(obj)
-    elif model_name == "Folder":
+    elif isinstance(obj, Folder):
         return cls._compute_folder_metrics(obj)
     else:
         return {}

If circular imports are a concern, consider a registry pattern or keeping the string-based approach but adding a comment explaining the tradeoff.
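The registry alternative mentioned above can be sketched as follows — names are hypothetical stand-ins, not the actual models; the point is that dispatch no longer relies on class-name strings, so a typo or rename fails loudly instead of silently returning `{}`:

```python
# Registry pattern: each model's compute function registers itself against the
# model class, and dispatch walks the MRO so subclasses are also handled.
METRIC_COMPUTERS = {}

def register_metrics(model_cls):
    def decorator(func):
        METRIC_COMPUTERS[model_cls] = func
        return func
    return decorator

class ComplianceAssessment:  # stand-in for the real Django model
    pass

@register_metrics(ComplianceAssessment)
def compute_compliance_metrics(obj):
    return {"progress": 0.5}

def compute_metrics(obj):
    for cls in type(obj).__mro__:
        if cls in METRIC_COMPUTERS:
            return METRIC_COMPUTERS[cls](obj)
    return {}

assert compute_metrics(ComplianceAssessment()) == {"progress": 0.5}
assert compute_metrics(object()) == {}
```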

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 213fa59 and 3037d0b.

📒 Files selected for processing (3)
  • backend/metrology/models.py (1 hunks)
  • frontend/messages/fr.json (4 hunks)
  • frontend/src/lib/components/Chart/MetricSampleChart.svelte (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • frontend/src/lib/components/Chart/MetricSampleChart.svelte
🧰 Additional context used
🧬 Code graph analysis (1)
backend/metrology/models.py (7)
backend/iam/models.py (2)
  • FolderMixin (331-350)
  • PublishInRootFolderMixin (353-368)
backend/core/base_models.py (2)
  • AbstractBaseModel (8-116)
  • NameDescriptionMixin (119-132)
backend/core/models.py (41)
  • FilteringLabelMixin (192-198)
  • I18nObjectMixin (163-170)
  • Terminology (1260-1713)
  • Category (4421-4436)
  • Category (6744-6748)
  • FieldPath (1265-1271)
  • Meta (114-115)
  • Meta (169-170)
  • Meta (197-198)
  • Meta (202-204)
  • Status (2262-2268)
  • Status (3660-3666)
  • Status (3807-3812)
  • Status (3976-3982)
  • Status (4478-4483)
  • Status (4564-4571)
  • Status (4640-4645)
  • Status (5573-5578)
  • Status (6449-6453)
  • Status (6843-6853)
  • Status (7297-7304)
  • save (2002-2010)
  • save (2126-2129)
  • save (3131-3133)
  • save (3692-3694)
  • save (3775-3782)
  • save (3957-3961)
  • save (4171-4195)
  • save (4556-4558)
  • save (4693-4696)
  • save (4769-4811)
  • save (5538-5568)
  • save (5696-5702)
  • save (6640-6660)
  • save (6837-6839)
  • save (6928-6934)
  • save (7213-7226)
  • delete (1244-1257)
  • delete (5529-5536)
  • metrics (5603-5617)
  • risk (1889-1891)
backend/metrology/views.py (6)
  • category (40-41)
  • status (74-75)
  • collection_frequency (79-80)
  • chart_type (126-127)
  • time_range (131-132)
  • aggregation (136-137)
backend/metrology/serializers.py (12)
  • Meta (19-21)
  • Meta (31-33)
  • Meta (38-40)
  • Meta (66-68)
  • Meta (86-88)
  • Meta (106-108)
  • Meta (117-119)
  • Meta (128-130)
  • Meta (142-151)
  • Meta (354-356)
  • Meta (386-388)
  • Meta (398-411)
backend/global_settings/serializers.py (1)
  • delete (31-34)
backend/global_settings/views.py (1)
  • delete (29-33)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: build_community_frontend
  • GitHub Check: build_enterprise_frontend
  • GitHub Check: build (3.12)
  • GitHub Check: test (3.12)
  • GitHub Check: startup-docker-compose-test
  • GitHub Check: enterprise-startup-docker-compose-test
  • GitHub Check: enterprise-startup-functional-test (3.12)
  • GitHub Check: startup-functional-test (3.12)
  • GitHub Check: Analyze (python)
🔇 Additional comments (5)
frontend/messages/fr.json (2)

67-68: LGTM - New metrology section translations.

The new metrology section keys are well-translated and consistent with the existing file structure.


2283-2283: Verify shortened translation appears in clear context.

The translation for "metricsDefinitions" has been shortened from "Définitions des métriques" to "Définitions". While this saves space, ensure this key is used in UI contexts where it's clear we're referring to metric definitions (e.g., under a "Metrics" or "Metrology" section heading).

If this appears in isolation or in ambiguous contexts, consider using the full form or adding context through the surrounding UI elements.

backend/metrology/models.py (3)

822-906: Excellent validation logic for complex widget configuration.

The clean() method thoroughly validates the three widget modes (text, custom metric, builtin metric) with clear error messages and proper mutual exclusivity checks. The grid position validation also prevents layout issues. Well done!


908-920: Robust implementation with sensible defaults.

The function properly handles missing settings, invalid values, and provides a reasonable default retention period. The max(1, int(retention)) guard ensures the value is always positive.
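The defensive parsing described here boils down to a small pattern — a sketch, not the actual function (the setting name, parameter shape, and 730-day default are assumptions based on the review text):

```python
# Parse a retention setting defensively: fall back to the default on missing or
# invalid values, and never allow a zero or negative retention period.
def get_builtin_metrics_retention_days(raw_value, default_days=730):
    try:
        retention = int(raw_value)
    except (TypeError, ValueError):  # None, missing setting, or garbage string
        return default_days
    return max(1, retention)  # clamp to at least 1 day to prevent data loss

assert get_builtin_metrics_retention_days("30") == 30
assert get_builtin_metrics_retention_days(None) == 730
assert get_builtin_metrics_retention_days(-5) == 1
```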


78-78: No action required—uniqueness validation is properly enforced.

MetricDefinition (line 78), MetricInstance (line 149), and Dashboard (line 602) inherit from AbstractBaseModel, which overrides save() to call clean() before persisting to the database. The clean() method validates uniqueness via is_unique_in_scope() using the fields_to_check attribute. This ensures uniqueness constraints are automatically enforced on every save, whether called directly via ORM or through forms and serializers.

[also_applies_to: 149-149, 602-602]
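The save-then-clean uniqueness pattern described above can be illustrated with a simplified stand-in — this is not the real AbstractBaseModel or is_unique_in_scope(), just a sketch of the mechanism the comment describes:

```python
# save() always runs clean(), which checks uniqueness of fields_to_check within
# the object's scope (here, a "folder"), so the constraint holds no matter how
# the save is triggered.
class ValidationError(Exception):
    pass

class ScopedUniqueModel:
    fields_to_check = ("name",)
    _store = []  # pretend database table

    def __init__(self, name, folder):
        self.name = name
        self.folder = folder

    def is_unique_in_scope(self):
        return not any(
            obj.folder == self.folder
            and all(getattr(obj, f) == getattr(self, f) for f in self.fields_to_check)
            for obj in self._store
        )

    def clean(self):
        if not self.is_unique_in_scope():
            raise ValidationError("name already used in this scope")

    def save(self):
        self.clean()  # runs on every save: ORM, forms, serializers alike
        self._store.append(self)

ScopedUniqueModel("kpi-1", folder="root").save()
try:
    ScopedUniqueModel("kpi-1", folder="root").save()
    raise AssertionError("expected ValidationError")
except ValidationError:
    pass
ScopedUniqueModel("kpi-1", folder="other").save()  # different scope is fine
```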

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
frontend/messages/en.json (1)

3017-3017: Consider punctuation consistency for hint texts.

The cascadeAffectedHint text ends with a period. While this isn't incorrect, hint texts throughout the file show inconsistent punctuation—some end with periods, others don't. For better consistency, consider establishing a convention (e.g., always include periods for complete sentences, omit for fragments).

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3037d0b and 65e6b5e.

📒 Files selected for processing (3)
  • enterprise/frontend/src/lib/components/SideBar/navData.ts (1 hunks)
  • frontend/messages/en.json (6 hunks)
  • frontend/messages/fr.json (4 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: startup-docker-compose-test
  • GitHub Check: startup-functional-test (3.12)
  • GitHub Check: enterprise-startup-functional-test (3.12)
  • GitHub Check: enterprise-startup-docker-compose-test
  • GitHub Check: test (3.12)
  • GitHub Check: build (3.12)
  • GitHub Check: build_community_frontend
  • GitHub Check: build_enterprise_frontend
  • GitHub Check: Analyze (python)
🔇 Additional comments (6)
frontend/messages/en.json (5)

68-69: LGTM: Metrology context keys added correctly.

The new metrology keys follow the established pattern for top-level navigation items and provide a clear description of the feature's purpose.


266-266: LGTM: Import key follows established pattern.

The importMetricDefinitions key is consistent with other import-related translation keys in this section.


876-877: LGTM: Generic value display keys.

The displayValue and currentValue keys are appropriately placed and provide clear labels for UI elements.


2711-2928: LGTM: Comprehensive metrology feature translation keys.

This large block adds approximately 218 new translation keys for the metrology feature, including:

  • Metric definitions, instances, and samples (custom and builtin)
  • Dashboard and widget management
  • Chart types and configurations
  • Ordered entry list UI components
  • Metric attributes and help text

The keys follow consistent naming conventions (camelCase) and include appropriate singular/plural pairs. The English translations are clear and descriptive.


3018-3021: LGTM: Metric-related description keys.

The count, metricDefinitionsDescription, metricInstancesDescription, and dashboardsDescription keys follow the established pattern for description texts and provide clear explanations of each feature.

frontend/messages/fr.json (1)

67-68: Verify technical terminology with French-speaking users before final approval.

The French translations for metrology/metrics/dashboard features are generally well-executed and consistent:

Verified Strengths:

  • Consistent terminology throughout ("tableau de bord" for dashboard, "métrique" for metric)
  • Clear, instructive help texts (e.g., line 2961's dashboard editor instructions: drag widgets, use resize handles, save changes)
  • Proper French grammar and capitalization
  • Maintains consistency with existing translation patterns

⚠️ Recommendations:

  1. Technical term choices: "widget" is consistently kept in English throughout (lines 2950–2983). While common in French tech contexts, confirm this aligns with your audience's expectations. Consider whether "composant" or "élément" would be more appropriate.

  2. Domain expert review: With 86 new translation keys added (lines 2934–3019), have a French-speaking user with metrics/KPI domain knowledge validate terms like:

    • "higherIsBetter" → "Plus élevé est mieux" (line 2935)
    • "choicesDefinition" → "Définition des choix" (line 2937)
    • "aggregation" → "Agrégation" (line 2943)
  3. Context clarity: Ensure generic translations like "displayValue": "Valeur" (line 2987) are appropriately clear within actual UI context.

Comment thread enterprise/frontend/src/lib/components/SideBar/navData.ts
@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
backend/conftest.py (1)

1-70: LGTM! Well-structured shared test fixture.

The RISK_MATRIX_JSON_DEFINITION constant provides a valid, reusable risk matrix structure with consistent Low/Medium/High definitions across probability, impact, and risk dimensions. The 3x3 grid properly maps probability-impact combinations to risk levels.

Optional: Add docstring for clarity

Consider adding a more detailed docstring to explain the structure:

 # Shared test fixtures and constants
 
-# Valid minimal risk matrix json_definition for tests
+"""
+Valid minimal risk matrix json_definition for tests.
+Defines a 3x3 risk matrix with:
+- Probability levels: Low (0), Medium (1), High (2)
+- Impact levels: Low (0), Medium (1), High (2)  
+- Risk levels: Low (0), Medium (1), High (2)
+- Grid: Maps (probability, impact) pairs to risk level indices
+"""
 RISK_MATRIX_JSON_DEFINITION = {
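To make the grid mapping concrete, here is a sketch of how a 3x3 grid resolves a (probability, impact) pair to a risk level — the index values in `GRID` are illustrative, not copied from the fixture:

```python
# The grid maps (probability index, impact index) to a risk level index,
# which is then resolved against the ordered risk-level names.
GRID = [
    [0, 0, 1],  # probability Low
    [0, 1, 2],  # probability Medium
    [1, 2, 2],  # probability High
]
RISK_LEVELS = ["Low", "Medium", "High"]

def risk_level(probability: int, impact: int) -> str:
    return RISK_LEVELS[GRID[probability][impact]]

assert risk_level(0, 0) == "Low"
assert risk_level(1, 1) == "Medium"
assert risk_level(2, 2) == "High"
```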
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 65e6b5e and 0058bab.

📒 Files selected for processing (6)
  • backend/app_tests/api/test_api_risk_acceptances.py (3 hunks)
  • backend/app_tests/api/test_api_risk_assessments.py (4 hunks)
  • backend/app_tests/api/test_api_risk_scenarios.py (4 hunks)
  • backend/app_tests/conftest.py (1 hunks)
  • backend/conftest.py (1 hunks)
  • backend/core/tests/test_models.py (9 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
backend/core/tests/test_models.py (1)
backend/core/models.py (2)
  • filename (3708-3713)
  • filename (3784-3785)
backend/app_tests/api/test_api_risk_scenarios.py (2)
frontend/src/lib/utils/types.ts (1)
  • RiskMatrix (168-174)
backend/core/models.py (1)
  • RiskMatrix (1827-1916)
backend/app_tests/api/test_api_risk_acceptances.py (3)
frontend/src/lib/utils/types.ts (1)
  • RiskMatrix (168-174)
backend/core/models.py (1)
  • RiskMatrix (1827-1916)
backend/app_tests/conftest.py (1)
  • test (54-67)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: startup-functional-test (3.12)
  • GitHub Check: enterprise-startup-docker-compose-test
  • GitHub Check: build_enterprise_frontend
  • GitHub Check: test (3.12)
  • GitHub Check: build_community_frontend
  • GitHub Check: enterprise-startup-functional-test (3.12)
  • GitHub Check: startup-docker-compose-test
  • GitHub Check: build (3.12)
  • GitHub Check: Analyze (python)
🔇 Additional comments (6)
backend/app_tests/conftest.py (1)

9-10: LGTM! Clean re-export of shared test fixture.

The re-export pattern allows app tests to consistently import from conftest without needing to reference the backend module directly. The noqa: F401 comment appropriately suppresses the unused import warning.

backend/app_tests/api/test_api_risk_scenarios.py (1)

15-15: LGTM! Consistent use of shared risk matrix test fixture.

The refactoring replaces inline RiskMatrix creation with the shared RISK_MATRIX_JSON_DEFINITION constant, improving test maintainability and consistency across the test suite.

Also applies to: 139-143, 176-180, 205-209

backend/app_tests/api/test_api_risk_acceptances.py (1)

14-14: LGTM! Consistent application of shared test fixture.

The RiskMatrix creation now uses the shared RISK_MATRIX_JSON_DEFINITION, aligning with the refactoring pattern applied across other risk-related test files.

Also applies to: 144-148, 200-204

backend/core/tests/test_models.py (2)

6-7: LGTM! Comprehensive adoption of shared test fixture.

The import and usage of RISK_MATRIX_JSON_DEFINITION across 7 test locations ensures consistent test data for risk matrix definitions throughout the model tests.

Also applies to: 114-114, 141-141, 246-246, 266-266, 291-291, 319-319, 347-347


72-75: LGTM! Correct fix for Django's file collision handling.

The assertion change properly accounts for Django's behavior of adding unique suffixes to filenames when avoiding collisions. The prefix/suffix check is more robust than strict equality, and the explanatory comment clarifies the intent.
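The prefix/suffix style of assertion can be sketched as follows — the stored filename below is illustrative (Django's actual collision suffix is a random string chosen by the storage backend):

```python
# On a name collision, Django's default storage keeps the stem and extension
# but inserts a unique suffix (e.g. "report.pdf" -> "report_x7Kp2Qa.pdf"),
# so tests should assert prefix/suffix rather than strict equality.
original = "report.pdf"
stored = "report_x7Kp2Qa.pdf"  # what Storage.save() might return on collision

stem, ext = original.rsplit(".", 1)
assert stored.startswith(stem)
assert stored.endswith("." + ext)
```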

backend/app_tests/api/test_api_risk_assessments.py (1)

7-7: LGTM! Consistent refactoring of unauthenticated test cases.

The RiskMatrix creation in unauthenticated tests now uses the shared RISK_MATRIX_JSON_DEFINITION constant. Note that authenticated tests appropriately use imported library-based risk matrices, which is the correct approach for those test scenarios.

Also applies to: 34-38, 71-75, 98-102

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (2)
backend/metrology/models.py (2)

159-164: Potential N+1 query or AttributeError in save() method.

The save() method checks self.metric_definition_id but then directly accesses self.metric_definition.default_target. If the metric_definition relation isn't already loaded, this could trigger an additional database query for each save operation. In bulk operations, this could lead to N+1 query problems.

🔎 More efficient approach
 def save(self, *args, **kwargs):
     # Inherit default_target from metric_definition if target_value is not set
     if self.target_value is None and self.metric_definition_id:
-        if self.metric_definition.default_target is not None:
-            self.target_value = self.metric_definition.default_target
+        # Only fetch if not already loaded (Django >= 2.0 caches loaded
+        # relations in self._state.fields_cache, not a per-field attribute)
+        if "metric_definition" in self._state.fields_cache:
+            if self.metric_definition.default_target is not None:
+                self.target_value = self.metric_definition.default_target
+        else:
+            metric_def = MetricDefinition.objects.only("default_target").get(
+                id=self.metric_definition_id
+            )
+            if metric_def.default_target is not None:
+                self.target_value = metric_def.default_target
     super().save(*args, **kwargs)

Alternatively, ensure callers use select_related('metric_definition') when bulk-saving instances.


283-295: Hardcoded unit display mappings reduce maintainability.

Lines 289-292 hardcode unit name to display symbol mappings ("percentage" → "%", "request_per_second" → "RPS"). This approach doesn't scale as new units are added and couples display logic to specific terminology names.

💡 Recommended approach

Consider adding a display_symbol field to the Terminology model for metric units, or use a configuration mapping:

Option 1: Add to Terminology model

# In Terminology model
display_symbol = models.CharField(max_length=20, blank=True, null=True)

Option 2: Use a configurable mapping

# At module level or in settings
UNIT_DISPLAY_SYMBOLS = {
    "percentage": "%",
    "request_per_second": "RPS",
    "bytes": "B",
    "milliseconds": "ms",
    # ... extensible without code changes
}

# In display_value()
unit = metric_definition.unit.name
display_symbol = UNIT_DISPLAY_SYMBOLS.get(unit, unit)
return f"{result} {display_symbol}"
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c1caa5a and ac4c2b0.

📒 Files selected for processing (1)
  • backend/metrology/models.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: build_enterprise_frontend
  • GitHub Check: build_community_frontend
  • GitHub Check: build (3.12)
  • GitHub Check: startup-docker-compose-test
  • GitHub Check: enterprise-startup-docker-compose-test
  • GitHub Check: startup-functional-test (3.12)
  • GitHub Check: enterprise-startup-functional-test (3.12)
  • GitHub Check: test (3.12)
  • GitHub Check: Analyze (python)
🔇 Additional comments (2)
backend/metrology/models.py (2)

826-910: Widget validation logic is comprehensive and correct.

The clean() method properly enforces:

  • Text widgets must not have metric fields (lines 831-849)
  • Non-text widgets must have exactly one metric type (lines 851-872)
  • Builtin metrics must have all required fields (lines 874-895)
  • Grid positioning must respect 12-column boundaries (lines 897-909)

The validation correctly handles mutual exclusivity and completeness constraints.
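The mutual-exclusivity rule summarised above can be sketched in isolation — field names here are assumptions drawn from the review summary, not copied from models.py:

```python
# Text widgets must carry no metric reference; metric widgets must carry
# exactly one (custom OR builtin, never both, never neither).
def validate_widget(widget_type, metric_instance=None, builtin_metric=None):
    errors = []
    metric_refs = [m for m in (metric_instance, builtin_metric) if m is not None]
    if widget_type == "text":
        if metric_refs:
            errors.append("text widgets must not reference a metric")
    else:
        if len(metric_refs) != 1:
            errors.append("metric widgets need exactly one metric reference")
    return errors

assert validate_widget("text") == []
assert validate_widget("text", metric_instance="m1") != []
assert validate_widget("kpi", metric_instance="m1") == []
assert validate_widget("kpi") != []
assert validate_widget("kpi", metric_instance="m1", builtin_metric="b1") != []
```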


912-924: LGTM: Defensive retention configuration with sensible defaults.

The function correctly:

  • Reads from global settings with a reasonable 2-year default
  • Enforces a minimum of 1 day to prevent data loss
  • Handles all expected exceptions gracefully

Comment thread backend/metrology/models.py
Comment thread backend/metrology/models.py
@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
backend/resilience/serializers.py (1)

126-126: Remove debug print statement.

The print(validated_data) statement is a debug artifact that should be removed before merging. Debug print statements can leak sensitive data to logs and clutter production output.

🔎 Proposed fix
-        print(validated_data)
backend/core/models.py (1)

4836-4879: RiskAssessment.save does not persist new instances (super().save only runs when pk is set)

As written, RiskAssessment.save() only calls super().save() inside if self.pk:. For newly created instances (self.pk is None), the method will skip the transaction.atomic() block entirely and go straight to self.upsert_daily_metrics(), meaning the object is never actually saved to the database. That’s a correctness blocker regardless of the new metrics behavior.

You can fix this by always saving the instance once, and only applying the “matrix_changed” clamping logic on updates:

Proposed fix for RiskAssessment.save
-    def save(self, *args, **kwargs) -> None:
-        old_matrix_id = None
-        if self.pk:
-            old_matrix_id = (
-                RiskAssessment.objects.filter(pk=self.pk)
-                .values_list("risk_matrix_id", flat=True)
-                .first()
-            )
-
-            matrix_changed = (
-                old_matrix_id is not None and old_matrix_id != self.risk_matrix_id
-            )
-
-            with transaction.atomic():
-                super().save(*args, **kwargs)
-
-                if matrix_changed:
-                    probabilities = list(range(len(self.risk_matrix.probability or [])))
-                    impacts = list(range(len(self.risk_matrix.impact or [])))
-
-                    min_prob, max_prob = min(probabilities), max(probabilities)
-                    min_impact, max_impact = min(impacts), max(impacts)
-
-                    fields = [
-                        ("current_proba", min_prob, max_prob),
-                        ("current_impact", min_impact, max_impact),
-                        ("residual_proba", min_prob, max_prob),
-                        ("residual_impact", min_impact, max_impact),
-                        ("inherent_proba", min_prob, max_prob),
-                        ("inherent_impact", min_impact, max_impact),
-                    ]
-
-                    for scenario in self.risk_scenarios.all():
-                        for field_name, min_val, max_val in fields:
-                            value = getattr(scenario, field_name)
-                            if value < 0:  # keep “not rated”
-                                continue
-                            setattr(
-                                scenario, field_name, max(min_val, min(value, max_val))
-                            )
-                        scenario.save()
-
-        self.upsert_daily_metrics()
+    def save(self, *args, **kwargs) -> None:
+        old_matrix_id = None
+        matrix_changed = False
+        if self.pk:
+            old_matrix_id = (
+                RiskAssessment.objects.filter(pk=self.pk)
+                .values_list("risk_matrix_id", flat=True)
+                .first()
+            )
+            matrix_changed = (
+                old_matrix_id is not None and old_matrix_id != self.risk_matrix_id
+            )
+
+        with transaction.atomic():
+            super().save(*args, **kwargs)
+
+            if matrix_changed:
+                probabilities = list(range(len(self.risk_matrix.probability or [])))
+                impacts = list(range(len(self.risk_matrix.impact or [])))
+
+                min_prob, max_prob = min(probabilities), max(probabilities)
+                min_impact, max_impact = min(impacts), max(impacts)
+
+                fields = [
+                    ("current_proba", min_prob, max_prob),
+                    ("current_impact", min_impact, max_impact),
+                    ("residual_proba", min_prob, max_prob),
+                    ("residual_impact", min_impact, max_impact),
+                    ("inherent_proba", min_prob, max_prob),
+                    ("inherent_impact", min_impact, max_impact),
+                ]
+
+                for scenario in self.risk_scenarios.all():
+                    for field_name, min_val, max_val in fields:
+                        value = getattr(scenario, field_name)
+                        if value < 0:  # keep “not rated”
+                            continue
+                        setattr(
+                            scenario,
+                            field_name,
+                            max(min_val, min(value, max_val)),
+                        )
+                    scenario.save()
+
+        self.upsert_daily_metrics()
♻️ Duplicate comments (1)
frontend/src/lib/utils/table.ts (1)

2368-2368: Fix missing "metrology/" prefix on metric filter endpoints.

Several metrology filter endpoints are inconsistent with the rest of the metrology configuration and lack the required metrology/ prefix:

  • Line 2368: metric-definitions/category → should be metrology/metric-definitions/category
  • Line 2388: metric-definitions/provider → should be metrology/metric-definitions/provider
  • Line 2402: metric-definitions → should be metrology/metric-definitions
  • Line 2410: metric-instances/status → should be metrology/metric-instances/status

In contrast, dashboard-related filters at lines 2462, 2470, 2478, 2496, and 2511 correctly use the metrology/ prefix. This inconsistency may cause API routing failures.

🔎 Proposed fix
         category: {
           component: AutocompleteSelect,
           props: {
-            optionsEndpoint: 'metric-definitions/category',
+            optionsEndpoint: 'metrology/metric-definitions/category',
             optionsLabelField: 'label',
             optionsValueField: 'value',
             label: 'category',
             browserCache: 'force-cache',
             multiple: true
           }
         },
         // ... library filter ...
         provider: {
           ...PROVIDER_FILTER,
           props: {
             ...PROVIDER_FILTER.props,
-            optionsEndpoint: 'metric-definitions/provider'
+            optionsEndpoint: 'metrology/metric-definitions/provider'
           }
         },
         metric_definition: {
           component: AutocompleteSelect,
           props: {
-            optionsEndpoint: 'metric-definitions',
+            optionsEndpoint: 'metrology/metric-definitions',
             label: 'metricDefinition',
             multiple: true
           }
         },
         status: {
           component: AutocompleteSelect,
           props: {
-            optionsEndpoint: 'metric-instances/status',
+            optionsEndpoint: 'metrology/metric-instances/status',
             optionsLabelField: 'label',
             optionsValueField: 'value',
             label: 'status',
             browserCache: 'force-cache',
             multiple: true
           }
         },

Also applies to: 2388-2388, 2402-2402, 2410-2410

🧹 Nitpick comments (6)
frontend/messages/fr.json (1)

2286-2286: Shortened translation may reduce clarity.

The translation was shortened from "Définitions des métriques" to just "Définitions". While this may be intentional for UI brevity (e.g., tab label within a metrics context), ensure users will understand this refers specifically to metric definitions when displayed.

frontend/messages/en.json (1)

2710-2727: Minor inconsistency: "Builtin" vs "builtin" capitalization.

There's a capitalization inconsistency between the keys:

  • Line 2721: "builtinMetricSample": "Builtin metric sample" (capitalized)
  • Line 2724: "builtinMetric": "builtin metric" (lowercase)

For consistency with the pattern at line 2721 and line 2723 ("customMetric": "Custom metric"), consider standardizing:

🔎 Suggested fix for consistency
-	"builtinMetric": "builtin metric",
+	"builtinMetric": "Builtin metric",
backend/core/models.py (4)

509-512: Library metric_definitions import looks solid; consider minor robustness tweaks

The new metric_definitions handling and update_metric_definitions() wiring into update_library() are coherent and match the existing threats/reference_controls pattern. Two small robustness nits you might consider (non‑blocking):

  • When normalizing URNs, also overwrite the value in defaults to keep the stored value canonical:

    normalized_urn = metric_definition["urn"].lower()
    metric_definition = {**metric_definition, "urn": normalized_urn}
  • If unit lookup by name fails, we silently store unit=None. If libraries are expected to always reference a valid unit, logging a warning (or at least ic during development) on lookup miss would help diagnose library content issues.

Overall behavior is correct; these are just resilience improvements.

Also applies to: 573-599, 1189-1192


1338-1338: Metric units terminology + translation handling are consistent; could be centralized

Adding FieldPath.METRIC_UNIT, DEFAULT_METRIC_UNITS, and create_default_metric_units() lines up nicely with MetricDefinition.unit.limit_choices_to and the new metric subsystem. The updated get_name_translated correctly supports both legacy string translations and the new {"fr": {"name": "..."} } structure, with a sensible fallback to self.name.capitalize().

If you find more places needing this “dict-or-string translation with nested name key” logic (e.g., other referential models), consider extracting a small helper (e.g., resolve_i18n_name(self.translations, self.name)) to avoid subtle divergence between implementations over time. Not urgent, just a DRY/readability win.

Also applies to: 1625-1682, 1755-1762, 1764-1773


4807-4811: BuiltinMetricSample hooks are correct; watch potential extra load on hot paths

Hooking BuiltinMetricSample.update_or_create_snapshot() into:

  • RiskAssessment.upsert_daily_metrics()
  • ComplianceAssessment.upsert_daily_metrics()
  • FindingsAssessment.upsert_daily_metrics() (via its new save() override)

gives you a nice, centralized way to keep the daily builtin metrics snapshots in sync with the domain models. The call sites are appropriate and reuse existing “upsert_daily_metrics” entry points.

The only caveat is performance: each save on these models (and on RiskScenario.save() / RequirementAssessment.save() / Finding.save() which call into the upsert methods) now triggers a recompute for the whole object’s metric set. For large assessments or very chatty update patterns this could add noticeable DB and CPU load.

If that ever becomes an issue, a future enhancement could be to:

  • debounce these updates via a lightweight queue (e.g., schedule a Celery task keyed by (model, id, date)), or
  • gate the snapshot update behind a cheap “have fields that affect metrics changed?” check.

For now, behavior is correct; this is just something to keep in mind if you see hotspots around these saves.

Also applies to: 5758-5762, 6898-6907
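
A minimal, framework-free sketch of the debounce idea (class and callback names are illustrative; a production version would schedule a Celery task per key instead of keeping an in-process set):

```python
from datetime import date

class SnapshotDebouncer:
    """Collect (model, pk, date) keys touched by saves and recompute each
    at most once per flush, instead of once per save."""

    def __init__(self, recompute):
        self._recompute = recompute  # callable(model_name, pk, day)
        self._pending = set()

    def mark_dirty(self, model_name, pk, day=None):
        # Cheap: called from save(); duplicate keys collapse into one entry.
        self._pending.add((model_name, str(pk), day or date.today()))

    def flush(self):
        # Expensive: run from a periodic task or at request teardown.
        keys, self._pending = sorted(self._pending), set()
        for model_name, pk, day in keys:
            self._recompute(model_name, pk, day)
        return len(keys)

# Five saves of the same assessment collapse into a single recompute.
recomputed = []
debouncer = SnapshotDebouncer(lambda model, pk, day: recomputed.append((model, pk)))
for _ in range(5):
    debouncer.mark_dirty("riskassessment", 42)
debouncer.mark_dirty("finding", 7)
debouncer.flush()
```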


6995-7002: Finding → FindingsAssessment metrics propagation is correct; may be chatty for bulk updates

The new Finding.save() override correctly:

  • saves the finding,
  • bumps the parent FindingsAssessment.updated_at via a direct update() (avoiding recursive saves),
  • and calls self.findings_assessment.upsert_daily_metrics(), which in turn updates the BuiltinMetricSample snapshot.

Functionally this is what you want, but note that any batch-ish operations that iterate Finding.save() in a loop will now recompute the entire findings assessment metrics once per finding. If you expect bulk edits, consider (later) either:

  • adding a skip_metrics kwarg for callers that will explicitly refresh once at the end, or
  • introducing a small “dirty flag” and a periodic/task-based recomputation.

Not mandatory for this PR, but worth being aware of.
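
The skip_metrics idea can be sketched with plain-Python stand-ins for the Django models (class and method names mirror the description above; this is not the PR's code):

```python
class FindingsAssessment:
    """Stand-in for the Django model; only tracks how often metrics recompute."""
    def __init__(self):
        self.metric_recomputes = 0

    def upsert_daily_metrics(self):
        self.metric_recomputes += 1  # an expensive snapshot recompute in reality

class Finding:
    def __init__(self, assessment):
        self.findings_assessment = assessment

    def save(self, skip_metrics=False):
        # ... persist the finding and bump the parent's updated_at ...
        if not skip_metrics:
            self.findings_assessment.upsert_daily_metrics()

# Bulk edit: suppress per-finding recomputes, refresh once at the end.
assessment = FindingsAssessment()
for finding in [Finding(assessment) for _ in range(100)]:
    finding.save(skip_metrics=True)
assessment.upsert_daily_metrics()
```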

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ac4c2b0 and 8802da4.

📒 Files selected for processing (13)
  • backend/ciso_assistant/settings.py (1 hunks)
  • backend/core/models.py (10 hunks)
  • backend/core/startup.py (3 hunks)
  • backend/core/urls.py (1 hunks)
  • backend/resilience/serializers.py (1 hunks)
  • enterprise/backend/enterprise_core/settings.py (1 hunks)
  • frontend/messages/en.json (7 hunks)
  • frontend/messages/fr.json (5 hunks)
  • frontend/src/lib/components/Forms/ModelForm/AssetAssessmentForm.svelte (1 hunks)
  • frontend/src/lib/utils/table.ts (3 hunks)
  • frontend/src/lib/utils/types.ts (1 hunks)
  • frontend/src/routes/(app)/(internal)/[model=urlmodel]/+page.svelte (1 hunks)
  • frontend/src/routes/(app)/(internal)/stored-libraries/[id=uuid]/+page.svelte (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
  • frontend/src/routes/(app)/(internal)/[model=urlmodel]/+page.svelte
  • backend/core/urls.py
  • enterprise/backend/enterprise_core/settings.py
  • backend/ciso_assistant/settings.py
  • frontend/src/routes/(app)/(internal)/stored-libraries/[id=uuid]/+page.svelte
  • backend/core/startup.py
🧰 Additional context used
🧬 Code graph analysis (2)
backend/resilience/serializers.py (1)
backend/core/serializer_fields.py (1)
  • FieldsRelatedField (26-96)
backend/core/models.py (2)
backend/core/views.py (8)
  • get (1113-1120)
  • get (6744-6745)
  • filter (190-216)
  • field_path (11726-11727)
  • update (775-808)
  • update (5169-5181)
  • update (5431-5444)
  • update (8952-8955)
backend/metrology/models.py (5)
  • MetricDefinition (19-92)
  • BuiltinMetricSample (314-588)
  • update_or_create_snapshot (366-390)
  • save (159-164)
  • save (299-303)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Analyze (python)
🔇 Additional comments (15)
frontend/messages/fr.json (3)

67-68: LGTM - New metrology navigation entries.

The translations for the new metrology feature navigation are appropriate. "Métriques" is a suitable French translation for the feature, and the description clearly explains the functionality.


2952-2952: Translation includes additional context not in the English key.

The English key showLegend is translated as "Afficher le titre et la légende" which includes "titre" (title). This might be intentional if the feature actually shows both title and legend together, but verify this matches the actual UI behavior to avoid confusion.


2938-3022: New metrology translation keys look good.

The translations for the new dashboard and metrics feature are well-formed and follow French grammar conventions. Key observations:

  • "Plus élevé est mieux" for higherIsBetter is clear
  • Metric-related terminology (métrique, tableau de bord, widget) is consistent
  • Widget configuration terms are properly translated
frontend/messages/en.json (8)

68-69: LGTM! Clear metrology section introduction.

The metrology section introduction is well-structured with a clear description of the feature's purpose.


266-266: LGTM! Consistent with import key patterns.

The import key follows the established naming convention and is grammatically correct.


475-475: LGTM! Improved clarity for users.

The label change from "BIA assessment" to "Asset impacts assessment" improves clarity, making it more accessible to users who may not be familiar with the BIA acronym.


1875-1876: LGTM! Clear value terminology.

The display value and current value keys are well-defined and appropriate for their use cases.


2728-2764: LGTM! Well-structured dashboard and widget keys.

The dashboard and widget keys are comprehensive, consistently named, and provide clear user-facing text for the metrology UI components.


2765-2775: LGTM! Clear metric configuration keys with helpful descriptions.

The metric configuration keys are well-defined with informative help text that guides users on how to use these settings effectively.


2913-2927: LGTM! Comprehensive ordered entry list UI keys.

The ordered entry list keys provide a complete set of translations for the UI component, including accessibility labels and debug information. The metric type keys (qualitative, quantitative, etc.) are clear and appropriate.


3019-3024: LGTM! Clear final metrology descriptions and improved cascade hint.

The metrology feature descriptions are concise and informative. The enhanced cascade affected hint provides clear guidance on the impact of deletion operations.

frontend/src/lib/utils/table.ts (2)

2351-2517: Well-structured metrology configurations.

The seven new metrology-related list view configurations follow a consistent pattern with appropriate head/body field mappings and filters. The use of shared filter constants (DOMAIN_FILTER, LABELS_FILTER) maintains consistency with existing configurations.


1777-1778: Nested asset property access is correctly configured.

The backend AssetAssessmentReadSerializer returns the asset field as a nested object with the fields ["id", "str", "ref_id"], which correctly supports the frontend's access to asset.ref_id and asset.str at lines 1777-1778.

frontend/src/lib/utils/types.ts (1)

127-135: LGTM! Metrology URL models added correctly.

The seven new metrology URL model entries are properly integrated following the existing naming convention (kebab-case) and are appropriately grouped with a comment. These entries align with the listViewFields configurations in table.ts.

backend/resilience/serializers.py (1)

44-44: No changes needed. The asset field specification follows the established pattern used throughout the serializer for related fields (e.g., asset_class, bia). The fields ["id", "str", "ref_id"] provide sufficient asset identification and align with the codebase's API design approach of explicitly controlling nested object representation.

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
backend/metrology/migrations/0001_initial.py (2)

347-354: Consider adding related_name to the metric_definition ForeignKey.

The metric_definition FK lacks an explicit related_name. Django will auto-generate metricinstance_set, but adding an explicit name like related_name="instances" would improve code clarity when querying metric instances from a definition:

# With explicit related_name="instances":
definition.instances.all()  # clearer intent
# vs auto-generated:
definition.metricinstance_set.all()

592-656: Consider adding a compound index on (metric_instance, timestamp) for query performance.

Time-series queries will typically filter by metric_instance and order by timestamp. While timestamp has db_index=True, a compound index would significantly improve performance for the common query pattern:

CustomMetricSample.objects.filter(metric_instance=instance).order_by('-timestamp')
🔎 Proposed index addition
options={
    "verbose_name": "Custom metric sample",
    "verbose_name_plural": "Custom metric samples",
    "ordering": ["-timestamp"],
    "indexes": [
        models.Index(
            fields=["metric_instance", "timestamp"],
            name="metrology_c_metric_ts_idx",
        )
    ],
},
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8802da4 and 94d07c8.

📒 Files selected for processing (2)
  • backend/core/migrations/0120_alter_terminology_field_path.py (1 hunks)
  • backend/metrology/migrations/0001_initial.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
backend/metrology/migrations/0001_initial.py (2)
backend/core/migrations/0120_alter_terminology_field_path.py (1)
  • Migration (6-28)
backend/iam/models.py (2)
  • Folder (80-328)
  • get_root_folder_id (92-97)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (38)
  • GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/settings/general.test.ts)
  • GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/findings-assessments.test.ts)
  • GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/tprm.test.ts)
  • GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/compliance-assessments.tes...
  • GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/common.test.ts)
  • GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/ebios-rm.test.ts)
  • GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/settings/sso.test.ts)
  • GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/startup.test.ts)
  • GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/business-impact-analysis.t...
  • GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/my-assignments.test.ts)
  • GitHub Check: functional-tests (3.12, chromium, tests/functional/startup.test.ts)
  • GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/incidents.test.ts)
  • GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/login.test.ts)
  • GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/libraries.test.ts)
  • GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/analytics.test.ts)
  • GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/mappings.test.ts)
  • GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/common.test.ts)
  • GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/incidents.test.ts)
  • GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/login.test.ts)
  • GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/user-route.test.ts)
  • GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/my-assignments.test.ts)
  • GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/settings/general.test.ts)
  • GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/analytics.test.ts)
  • GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/settings/sso.test.ts)
  • GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/business-impact-analysis.test.ts)
  • GitHub Check: functional-tests (3.12, chromium, tests/functional/user-route.test.ts)
  • GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/libraries.test.ts)
  • GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/findings-assessments.test.ts)
  • GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/mappings.test.ts)
  • GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/ebios-rm.test.ts)
  • GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/compliance-assessments.test.ts)
  • GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/tprm.test.ts)
  • GitHub Check: test (3.12)
  • GitHub Check: enterprise-startup-functional-test (3.12)
  • GitHub Check: enterprise-startup-docker-compose-test
  • GitHub Check: startup-docker-compose-test
  • GitHub Check: startup-functional-test (3.12)
  • GitHub Check: build (3.12)
🔇 Additional comments (6)
backend/core/migrations/0120_alter_terminology_field_path.py (2)

6-9: LGTM!

The migration dependency chain is correctly specified.


11-27: Verify that no existing Terminology records have field_path values outside the new choices.

This migration updates the field_path field choices to add metric_definition.unit. The new choices now include: ro_to.risk_origin, qualifications, accreditation.status, accreditation.category, entity.relationship, and metric_definition.unit.

This migration does not change the database at all; it exists only for Django's internal state tracking. Still, confirm that all existing Terminology records have field_path values present in this new choices list. Existing records with values outside this list could cause validation errors when accessed via forms or the ORM, even though the migration itself is safe.

backend/metrology/migrations/0001_initial.py (4)

10-18: LGTM - Migration dependencies are correctly configured.

The dependencies properly reference the required migrations from contenttypes, core (including the Terminology field_path change for metric_definition.unit), iam, and the swappable AUTH_USER_MODEL.


657-732: LGTM - BuiltinMetricSample is well designed.

Good design choices:

  • Compound index on (content_type, object_id, date) optimizes the common lookup pattern
  • UniqueConstraint prevents duplicate samples per object per day
  • Separate indexes on object_id and date support additional query patterns
  • No folder FK is appropriate since these are system-generated samples

134-137: Reconsider using null=True on the name CharField.

Django convention is to use an empty string, not NULL, as the "no data" state for string-based fields. The name field allowing null=True is inconsistent with this pattern and creates two possible "no data" states. The only exception is when CharField has both unique=True and blank=True set, which requires null=True to avoid unique constraint violations. Verify whether this exception applies to your name field; if not, remove null=True and rely on empty strings instead.
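
In sketch form (only the field name comes from the migration; the other options are illustrative):

```python
# Conventional: one "no data" state, the empty string.
name = models.CharField(max_length=200, blank=True, default="")

# Current migration: two "no data" states, "" and NULL.
name = models.CharField(max_length=200, blank=True, null=True)
```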


562-584: Widget data source validation is already enforced.

The DashboardWidgetWriteSerializer.validate() method already ensures that non-text widgets must have either metric_instance or target_content_type (but not both, and not neither). It raises a ValidationError with "Please select either a custom metric instance or a builtin metric..." Text widgets are correctly allowed to skip metric configuration. The suggested validation is already implemented at the application level.

Likely an incorrect or invalid review comment.
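
The exclusive-or rule described above can be sketched framework-free (the function name and message are illustrative; the real check lives in DashboardWidgetWriteSerializer.validate()):

```python
def validate_widget_source(widget_type, metric_instance=None, target_content_type=None):
    """Non-text widgets must reference exactly one data source:
    a custom metric instance OR a builtin metric content type."""
    if widget_type == "text":
        return  # text widgets carry no metric configuration
    has_custom = metric_instance is not None
    has_builtin = target_content_type is not None
    if has_custom == has_builtin:  # both set, or neither set
        raise ValueError(
            "Please select either a custom metric instance or a builtin metric."
        )

validate_widget_source("text")                       # OK: no source needed
validate_widget_source("kpi", metric_instance="m1")  # OK: exactly one source
```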

Comment thread backend/metrology/migrations/0001_initial.py
@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
backend/library/core_cybersecurity_metrics.yaml (1)

15-294: Excellent metric definitions with comprehensive coverage.

The 25 metric definitions are well-structured and cover essential cybersecurity domains:

  • Quantitative metrics (19): All have appropriate units (percentage, count, days, hours) and sensible higher_is_better flags
  • Qualitative metrics (6): Follow consistent 5-level maturity model (Initial → Developing → Defined → Managed → Optimized)
  • Naming conventions: URNs and ref_ids follow consistent patterns
  • Targets: Default targets are aspirational (e.g., 0.0 for incidents, 100% for MFA) but align with industry best practices

The aggressive default targets (0.0 incidents/vulnerabilities, 100% coverage) are defensible as goals and can be customized per instance through the metrology framework.

💡 Optional: Consider documenting target customization guidance

While the default targets represent best-practice goals, organizations may want guidance on customizing them based on:

  • Organizational maturity and risk appetite
  • Industry benchmarks and peer comparisons
  • Resource constraints and compliance requirements

Consider adding a README or inline comments explaining that these are aspirational defaults.

frontend/messages/fr.json (1)

2286-2286: Verify shortened translation maintains clarity in UI context

The translation for metricsDefinitions was shortened from "Définitions des métriques" to just "Définitions". While this may be intentional for UI space constraints, the loss of the qualifier "des métriques" could reduce clarity if this label appears without sufficient surrounding context.

Please verify that this abbreviated form remains clear and unambiguous in all UI contexts where it's displayed (menus, buttons, page titles, etc.).

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c38dbe7 and 0f688a1.

⛔ Files ignored due to path filters (1)
  • tools/metrics/core_cybersecurity_metrics.xlsx is excluded by !**/*.xlsx
📒 Files selected for processing (3)
  • backend/library/core_cybersecurity_metrics.yaml (1 hunks)
  • frontend/messages/fr.json (5 hunks)
  • frontend/src/routes/(app)/(internal)/libraries/+page.svelte (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: build (3.12)
  • GitHub Check: test (3.12)
  • GitHub Check: build_community_frontend
  • GitHub Check: build_enterprise_frontend
  • GitHub Check: enterprise-startup-functional-test (3.12)
  • GitHub Check: startup-functional-test (3.12)
  • GitHub Check: startup-docker-compose-test
  • GitHub Check: enterprise-startup-docker-compose-test
  • GitHub Check: Analyze (python)
🔇 Additional comments (4)
backend/library/core_cybersecurity_metrics.yaml (1)

1-14: LGTM! Library metadata is well-structured.

The library metadata follows the expected format with all required fields present (URN, locale, version, dates, etc.). The publication date of December 13, 2025 is reasonable for a library prepared in advance of release.

frontend/messages/fr.json (3)

67-68: LGTM - New metrology translations

The new metrology feature translations are clear and well-written. The description effectively conveys the purpose of tracking metrics and KPIs on customizable dashboards.

Also applies to: 258-258


1793-1793: LGTM - Improved translation clarity

The updated translation from "Focus" to "Analyse d'impact de l'actif" is more descriptive and accurately conveys the purpose of the asset assessment feature in the BIA context.


2938-3022: LGTM - Comprehensive metrology feature translations

The addition of 84 new translation keys for the metrology feature is well-structured and consistent. The translations cover:

  • Metric definitions and instances
  • Dashboard and widget configuration
  • Sample collection and tracking
  • UI labels and help text

The French translations are grammatically correct and follow the established patterns in the file.

Recommendation: Verify completeness by checking that:

  1. All UI strings in the metrology feature code reference these translation keys
  2. No English fallback strings are hardcoded in the implementation
  3. All user-facing text for this feature is properly internationalized

Comment thread frontend/src/routes/(app)/(internal)/libraries/+page.svelte
@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
frontend/messages/en.json (1)

1874-1876: Remove unused translation key displayValue.

The key "displayValue": "Value" (line 1875) does not appear to be referenced anywhere in the codebase as a translation key. In contrast, "currentValue": "Current Value" (line 1876) is actively used in the metric-instances page. The displayValue string variable names found in Svelte components are unrelated component-level variables, not references to this i18n key.

Consider removing this unused translation key to reduce maintenance burden.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a8f0127 and e82a631.

📒 Files selected for processing (1)
  • frontend/messages/en.json (6 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: build_enterprise_frontend
  • GitHub Check: build_community_frontend
  • GitHub Check: startup-functional-test (3.12)
  • GitHub Check: startup-docker-compose-test
  • GitHub Check: enterprise-startup-docker-compose-test
  • GitHub Check: enterprise-startup-functional-test (3.12)
  • GitHub Check: test (3.12)
  • GitHub Check: build (3.12)
  • GitHub Check: Analyze (python)
🔇 Additional comments (4)
frontend/messages/en.json (4)

68-69: LGTM: Core metrology keys are well-defined.

The metrology feature keys follow the established naming conventions and provide clear, user-friendly descriptions. The description effectively communicates the purpose (tracking KPIs/KRIs and dashboard visualization).

Also applies to: 266-266


2710-2775: LGTM: Comprehensive metric and dashboard localization keys.

The extensive set of keys for the metrology feature demonstrates:

  • Consistent naming patterns (singular/plural pairs, "add" prefixes)
  • Clear help text for complex concepts (e.g., higherIsBetterHelpText)
  • Logical grouping (metrics, dashboards, widgets, configuration)
  • User-friendly terminology (e.g., "editLayout" vs "editAttributes" for different editing modes)

The implementation covers the full feature surface as described in the PR objectives.


2923-2927: LGTM: Type definitions and feature descriptions are clear and consistent.

The remaining metrology keys demonstrate:

  • Clear type distinctions (qualitative vs quantitative metrics)
  • Proper use of message format for pluralization (line 3019)
  • Consistent description style matching other features
  • Generic utility keys (e.g., "count") that promote reusability

The localization additions comprehensively support the new metrology feature set.

Also applies to: 3019-3023


2913-2922: Debug key is intentional and properly scoped.

The orderedEntryListDebugTitle key at line 2919 is used in the OrderedEntryList component and is controlled by a debug prop that defaults to false. The debug output only appears when explicitly enabled, so this is not user-facing in normal operation. No changes needed.

@ab-smith ab-smith merged commit 5801ca1 into main Dec 20, 2025
120 of 122 checks passed
@ab-smith ab-smith deleted the custom_metrics branch December 20, 2025 07:16
@github-actions github-actions Bot locked and limited conversation to collaborators Dec 20, 2025

Labels

  • db-migration: This branch contains migration files. Caution should be exercised when merging to avoid conflicts.
  • High Value

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant