feat(epic): custom KPI and KRI #2932
Walkthrough

Adds a new metrology app: DB models, REST endpoints, import/conversion support, management commands, periodic tasks, startup provisioning, frontend UI (forms/pages/components), translations, and permissions; integrates metric snapshots into existing assessment save paths and library import flows.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant YAML as Library YAML
    participant Importer as LibraryImporter
    participant MetricImp as MetricDefinitionImporter
    participant Terminology as Terminology
    participant DB as Database
    YAML->>Importer: provide metric_definitions
    Importer->>MetricImp: init_metric_definitions(list)
    loop each metric
        MetricImp->>Terminology: resolve unit by name (METRIC_UNIT)
        Terminology-->>MetricImp: unit object or None
        MetricImp->>DB: create/update MetricDefinition record
        DB-->>MetricImp: saved
    end
    MetricImp-->>Importer: import complete
```
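The import loop in the diagram above can be sketched as follows. This is a minimal illustration only: the function and parameter names are hypothetical stand-ins, and a plain dict stands in for both the Terminology lookup and the database, not the actual LibraryImporter/MetricDefinitionImporter APIs.

```python
def init_metric_definitions(definitions: list[dict],
                            units_by_name: dict,
                            store: dict) -> dict:
    """Create or update metric-definition records, resolving each unit by name.

    Mirrors the sequence diagram: an unresolved unit becomes None instead of
    failing the whole import.
    """
    for item in definitions:
        # Terminology lookup (METRIC_UNIT); may return None when the unit is unknown
        unit = units_by_name.get(item.get("unit"))
        # create-or-update keyed by ref_id, like an upsert against the DB
        record = store.setdefault(item["ref_id"], {"ref_id": item["ref_id"]})
        record.update(name=item["name"], unit=unit)
    return store
```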
```mermaid
sequenceDiagram
    participant Assessment as Assessment instance (save)
    participant Upserter as upsert_daily_metrics()
    participant BMS as BuiltinMetricSample
    participant Compute as compute_metrics()
    participant DB as Database
    Assessment->>Upserter: save triggers upsert_daily_metrics()
    Upserter->>BMS: update_or_create_snapshot(assessment)
    BMS->>Compute: compute_metrics(assessment)
    Compute->>Compute: run type-specific _compute_* methods
    Compute-->>BMS: metrics dict
    BMS->>DB: get_or_create by (content_type, object_id, date)
    DB-->>BMS: sample created/updated
    BMS-->>Assessment: snapshot persisted
```
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Areas requiring extra attention:
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
🦉 GitGuardian: ✅ There are no secrets present in this pull request anymore. If these secrets were true positives and are still valid, we highly recommend revoking them.
Actionable comments posted: 0
♻️ Duplicate comments (1)
frontend/src/lib/components/Chart/MetricSampleChart.svelte (1)
21-21: Hardcoded `chart_id` still causes collisions with multiple chart instances.

This issue was flagged in a previous review but remains unresolved. When multiple `MetricSampleChart` components render on the same page, they target the same DOM element ID, causing rendering conflicts.

🔎 Proposed fix using crypto.randomUUID()

```diff
- const chart_id = 'metric-sample-chart';
+ const chart_id = `metric-sample-chart-${crypto.randomUUID()}`;
```
🧹 Nitpick comments (1)
frontend/src/lib/components/Chart/MetricSampleChart.svelte (1)
5-11: Consider stronger typing for Props interface.

The `any` types for `samples` and `metricDefinition` reduce type safety. More specific interfaces would improve maintainability and catch errors at compile time.

💡 Example TypeScript interfaces

```typescript
interface MetricSample {
	timestamp: string;
	value: string | { choice_index?: number; result?: number };
}

interface ChoiceDefinition {
	name: string;
}

interface Unit {
	name: string;
}

interface MetricDefinition {
	category: 'qualitative' | 'quantitative';
	unit?: Unit;
	choices_definition?: ChoiceDefinition[];
}

interface Props {
	samples?: MetricSample[];
	metricDefinition?: MetricDefinition;
	width?: string;
	height?: string;
	classesContainer?: string;
}
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- frontend/src/lib/components/Chart/MetricSampleChart.svelte (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: enterprise-startup-functional-test (3.12)
- GitHub Check: enterprise-startup-docker-compose-test
- GitHub Check: startup-functional-test (3.12)
- GitHub Check: startup-docker-compose-test
- GitHub Check: test (3.12)
- GitHub Check: build_enterprise_frontend
- GitHub Check: build_community_frontend
- GitHub Check: build (3.12)
🔇 Additional comments (3)
frontend/src/lib/components/Chart/MetricSampleChart.svelte (3)
22-48: LGTM: Data processing logic is sound.

The component correctly handles both qualitative and quantitative metrics, safely parses JSON values, filters invalid data, and sorts chronologically. Error handling for malformed data is appropriate.
140-146: ✅ Memory leak fix verified.

The resize listener now correctly uses the same function reference (`handleResize`) for both `addEventListener` and `removeEventListener`. Chart disposal is also properly handled. This addresses the issue flagged in the previous review.
69-70: No issues found - indexing is correctly 1-based throughout.

Backend code explicitly states "Convert 1-based index to 0-based for array access" (backend/metrology/models.py:276), confirming choice_index values are intentionally 1-based. Frontend code correctly assumes this: lines 69-70 and 98 use `value - 1` to access the choice names array, and the y-axis is configured with min: 1 (lines 93-94). The indexing is consistent and correct across the entire codebase. No changes needed.

Likely an incorrect or invalid review comment.
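The 1-based contract described above can be captured in a small helper. This is an illustrative sketch (the function name and list-of-strings representation are assumptions, not the project's actual API):

```python
def choice_name(choices: list[str], choice_index: int) -> str:
    """Resolve a 1-based choice_index against a list of choice names."""
    if not 1 <= choice_index <= len(choices):
        raise IndexError(
            f"choice_index {choice_index} out of range 1..{len(choices)}"
        )
    return choices[choice_index - 1]  # convert 1-based index to 0-based
```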
There was a problem hiding this comment.
Actionable comments posted: 3
🧹 Nitpick comments (6)
frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.svelte (2)
94-96: Consider validating chart_type before use.

While the fallback handles undefined `chartType`, consider adding validation where `getMinDimensions` is called to ensure `widget.chart_type` is defined. This would make debugging easier if widgets arrive without a chart type.
124-168: Optional: Extract common modal logic.

The three modal functions share similar structure. Consider extracting a helper function to reduce duplication:

🔎 Suggested refactor

```typescript
function openAddWidgetModal(type: 'custom' | 'builtin' | 'text') {
	const configs = {
		custom: {
			form: widgetCreateForm,
			model: widgetModel,
			formAction: '/dashboard-widgets?/create',
			title: 'addCustomWidget'
		},
		builtin: {
			form: builtinWidgetCreateForm,
			model: builtinWidgetModel,
			formAction: '/dashboard-builtin-widgets?/create',
			title: 'addBuiltinWidget',
			supportedModels
		},
		text: {
			form: textWidgetCreateForm,
			model: textWidgetModel,
			formAction: '/dashboard-widgets?/create',
			title: 'addTextWidget'
		}
	};
	const config = configs[type];
	modalStore.trigger({
		type: 'component',
		component: {
			ref: CreateModal,
			props: {
				form: config.form,
				model: config.model,
				formAction: config.formAction,
				...(config.supportedModels && { supportedModels: config.supportedModels })
			}
		},
		title: safeTranslate(config.title)
	});
}
```

frontend/messages/en.json (2)
2708-2773: Minor wording fixes for metric instances and layout tab

Two small UX-facing text issues worth tightening up:

- Line 2713: `"metricInstances": "Metrics Instances"` is grammatically awkward; “Metric instances” reads more naturally.
- Line 2759: `"editLayout": "Widgets"` is slightly confusing given the key name; if this is the tab where layout is edited, “Layout” would be clearer and also align better with the French translation.

Proposed wording tweaks

```diff
- "metricInstance": "Metric Instance",
- "metricInstances": "Metrics Instances",
+ "metricInstance": "Metric instance",
+ "metricInstances": "Metric instances",

- "editLayout": "Widgets",
+ "editLayout": "Layout",
```
2911-2925: Ordered-entry list labels are fine; optional casing polish

The ordered-entry list keys are clear and self‑consistent. If you want to fully align with the rest of the UI’s sentence‑case buttons, you could optionally change “Add New Entry” to “Add new entry”, but it’s not required.

Optional stylistic tweak

```diff
- "orderedEntryListAddNewEntry": "Add New Entry",
+ "orderedEntryListAddNewEntry": "Add new entry",
```

frontend/messages/fr.json (2)
2283-2283: `metricsDefinitions` label could be made more explicit (optional)

`"metricsDefinitions": "Définitions"` is understandable but a bit generic; if this appears out of context, “Définitions de métriques” would be clearer and closer to the English “Metrics Definitions”. Not mandatory, but worth considering.

Optional clarification

```diff
- "metricsDefinitions": "Définitions",
+ "metricsDefinitions": "Définitions de métriques",
```
2931-3018: Align layout tab label and slightly clarify metric definitions (FR block)

Overall this block is well translated and matches the new metrics/dashboard features. A couple of small improvements:

Tab label alignment with EN:

- EN suggestion was to use `"editLayout": "Layout"` for the layout-editing tab.
- FR currently has `"editLayout": "Modifier la disposition"`, which reads more like an action than a short tab label, and doesn’t mirror the noun form.
- Consider shortening to just “Disposition” so both languages use a concise section/tab title.

Metric definition labels (optional):

- `"metricDefinitions": "Définitions"` and `"metricsDefinitions": "Définitions"` are both quite generic.
- If these labels surface in places where context isn’t obvious, “Définitions de métriques” (for one or both) would better match the English and disambiguate.

Proposed French tweaks

```diff
- "editLayout": "Modifier la disposition",
+ "editLayout": "Disposition",

- "metricDefinitions": "Définitions",
+ "metricDefinitions": "Définitions de métriques",
```

(Optionally make the same change for `"metricsDefinitions"` if you want both to read explicitly “Définitions de métriques”.)
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- frontend/messages/en.json (6 hunks)
- frontend/messages/fr.json (4 hunks)
- frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.svelte (1 hunks)
🧰 Additional context used
🪛 GitHub Actions: Frontend Linters
frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.svelte
[warning] 1-1: Code style issues found in the above file. Run 'pnpm exec prettier --write' to fix.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: build_enterprise_frontend
- GitHub Check: build_community_frontend
- GitHub Check: build (3.12)
- GitHub Check: test (3.12)
- GitHub Check: startup-docker-compose-test
- GitHub Check: enterprise-startup-functional-test (3.12)
- GitHub Check: startup-functional-test (3.12)
- GitHub Check: enterprise-startup-docker-compose-test
🔇 Additional comments (6)
frontend/messages/en.json (4)
68-69: Metrology section label and description look consistent

Label and description are clear and fit the new metrics/KPI feature area; no issues spotted.
266-266: Import metric definitions action is well named

“Import metric definitions” matches existing import* patterns and is self‑explanatory.
1873-1874: `displayValue`/`currentValue` naming is coherent

These labels are consistent with the existing `value` key and with how metric values are typically shown; nothing to adjust here.
3014-3018: Cascade hint and metric catalog descriptions are clear

`cascadeAffectedHint`, `count`, and the three *Description keys read well and explain the behavior/sections clearly; no changes needed.

frontend/messages/fr.json (2)
67-68: Metrology section translations are appropriate

“Métriques” and its description accurately capture the new metrics/KPI area; terminology fits the rest of the app.
258-258: Import metric definitions translation is accurate

“Importer des définitions de métriques” matches the English intent and existing import labels.
Actionable comments posted: 0
♻️ Duplicate comments (1)
frontend/src/lib/utils/table.ts (1)
2335-2413: Endpoint prefix inconsistency remains unresolved (duplicate issue).

This is a duplicate of a previous review comment. Several metrology filter endpoints are still missing the required `metrology/` prefix:

- Line 2352: `'metric-definitions/category'` → should be `'metrology/metric-definitions/category'`
- Line 2372: `'metric-definitions/provider'` → should be `'metrology/metric-definitions/provider'`
- Line 2386: `'metric-definitions'` → should be `'metrology/metric-definitions'`
- Line 2394: `'metric-instances/status'` → should be `'metrology/metric-instances/status'`

Note that dashboard-related endpoints (lines 2446, 2454, 2462, 2480, 2495) already correctly use the `metrology/` prefix, showing this can be implemented consistently.
🧹 Nitpick comments (1)
frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.server.ts (1)
86-98: Consider parallelizing select option fetches for faster page loads.

The current implementation fetches select options sequentially, which works but is slower than necessary. For dashboards with multiple select fields, this adds cumulative latency.

🔎 Refactor to use Promise.all for parallel fetching

```diff
- for (const selectField of selectFields) {
-   if (selectField.detail) continue;
-   const url = `${BASE_API_URL}/${widgetModel.endpointUrl}/${selectField.field}/`;
-   const response = await event.fetch(url);
-   if (response.ok) {
-     selectOptions[selectField.field] = await response.json().then((data: Record<string, any>) =>
-       Object.entries(data).map(([key, value]) => ({
-         label: value,
-         value: selectField.valueType === 'number' ? parseInt(key) : key
-       }))
-     );
-   }
- }
+ await Promise.all(
+   selectFields
+     .filter((selectField) => !selectField.detail)
+     .map(async (selectField) => {
+       const url = `${BASE_API_URL}/${widgetModel.endpointUrl}/${selectField.field}/`;
+       const response = await event.fetch(url);
+       if (response.ok) {
+         selectOptions[selectField.field] = await response.json().then((data: Record<string, any>) =>
+           Object.entries(data).map(([key, value]) => ({
+             label: value,
+             value: selectField.valueType === 'number' ? parseInt(key) : key
+           }))
+         );
+       }
+     })
+ );
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- frontend/messages/fr.json (4 hunks)
- frontend/src/lib/utils/table.ts (1 hunks)
- frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.server.ts (1 hunks)
- frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.svelte (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-11-17T11:09:18.488Z
Learnt from: monsieurswag
Repo: intuitem/ciso-assistant-community PR: 2860
File: frontend/src/routes/(app)/(internal)/[model=urlmodel]/[id=uuid]/+page.server.ts:0-0
Timestamp: 2025-11-17T11:09:18.488Z
Learning: In frontend/src/routes/(app)/(internal)/[model=urlmodel]/[id=uuid]/+page.server.ts, the select action performs batch PATCH operations in Promise.all without transaction semantics. Partial failures (e.g., one object fails to update due to race conditions) are acceptable and should not trigger a fail() return, as most objects will still be updated successfully. Console warnings are sufficient for logging failures.
Applied to files:
- frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.server.ts
- frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.svelte
📚 Learning: 2025-09-18T10:42:17.597Z
Learnt from: ab-smith
Repo: intuitem/ciso-assistant-community PR: 2548
File: frontend/src/lib/components/ModelTable/LibraryOverview.svelte:1-22
Timestamp: 2025-09-18T10:42:17.597Z
Learning: The frontend of the ciso-assistant-community project is based on Svelte 5, so Svelte 5 runes like $props(), $derived, $state, etc. are expected and should not be flagged as issues.
Applied to files:
frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.svelte
🧬 Code graph analysis (1)
frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.server.ts (5)
frontend/src/lib/utils/load.ts (1)
- loadDetail (68-236)

frontend/src/lib/utils/crud.ts (2)
- getModelInfo (2451-2457), urlParamModelSelectFields (2478-2480)

frontend/src/lib/utils/constants.ts (1)
- BASE_API_URL (4-8)

frontend/src/lib/utils/schemas.ts (1)
- modelSchema (1481-1483)

frontend/src/lib/utils/actions.ts (2)
- nestedDeleteFormAction (251-256), nestedWriteFormAction (176-196)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: build (3.12)
- GitHub Check: startup-functional-test (3.12)
- GitHub Check: enterprise-startup-docker-compose-test
- GitHub Check: enterprise-startup-functional-test (3.12)
- GitHub Check: startup-docker-compose-test
- GitHub Check: build_community_frontend
- GitHub Check: build_enterprise_frontend
- GitHub Check: test (3.12)
- GitHub Check: Analyze (python)
🔇 Additional comments (9)
frontend/messages/fr.json (1)
67-68: Well-implemented French translations for metrology features.

The French translations for the new metrology subsystem are professionally done and consistent with existing patterns:
- Core navigation and descriptions properly translated
- Technical terminology appropriately handled (e.g., "widget", "dashboard" used as common French tech terms)
- Grammar and sentence structure are correct throughout
- The shortening of "metricsDefinitions" from "Définitions des métriques" to "Définitions" (line 2283) is reasonable for UI space constraints in navigation
The comprehensive set of ~90 new translation keys properly covers all metrology features including metrics, dashboards, widgets, samples, and related UI elements.
Also applies to: 258-258, 2283-2283, 2931-3018
frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.svelte (5)
1-117: LGTM: Setup and initialization are well-structured.

The component initialization follows Svelte 5 best practices with appropriate use of runes. Grid constants are clearly defined, and the modal subscription pattern correctly cleans up on unmount.

188-257: LGTM: Drag-and-drop logic correctly handles positioning and bounds.

The drag offset calculation prevents visual jumps, and position updates are properly constrained to grid boundaries with appropriate bounds checking.

259-273: LGTM: Resize logic correctly enforces dimension constraints.

The function properly enforces both minimum dimensions (per chart type) and maximum dimensions (grid boundaries), preventing widgets from exceeding grid bounds.

276-321: LGTM: Grid calculations and utilities are correctly implemented.

The derived values and helper functions properly manage grid state and generate appropriate CSS positioning. The row management logic allows for flexible canvas sizing.

324-586: Template rendering is well-structured with good UX patterns.

The layout editor UI provides clear visual feedback with grid backgrounds, hover states, and intuitive controls. Empty state handling guides users to add their first widget.
Note: Accessibility improvements for drag-and-drop and resize operations, memory leak prevention, and formatting fixes are already covered in previous review comments.
frontend/src/routes/(app)/(internal)/dashboards/[id=uuid]/layout/+page.server.ts (3)
11-45: LGTM: Dashboard and widget form initialization is correct.

The shallow copy pattern properly prevents mutation of shared model cache, and the firstFreeRow calculation correctly identifies the next available position for new widgets.

46-81: LGTM: Widget form preparation follows consistent patterns.

Both text and builtin widget forms use the same shallow copy approach and provide appropriate default values for their respective widget types.

133-175: LGTM: Layout save action correctly handles batch widget updates.

The sequential PATCH approach with error collection aligns with the codebase pattern of accepting partial failures in batch operations. Error handling appropriately captures individual failures while allowing other updates to proceed.
Actionable comments posted: 0
🧹 Nitpick comments (3)
frontend/messages/fr.json (1)
2931-3017: Recommend native French speaker review for new metrology translations.

This PR adds approximately 86 new translation keys for the metrology feature (metrics, dashboards, widgets, KPIs). The technical aspects are solid:
- ✓ Valid JSON syntax
- ✓ Consistent key naming conventions
- ✓ All keys have French translations (no untranslated English)
- ✓ No duplicate keys
However, given the volume and complexity of the new translations, I recommend having a native French speaker review these for:
- Translation accuracy and naturalness
- Consistency with existing terminology throughout the application
- Technical term choices (e.g., "widget", "dashboard" usage in French contexts)
- Help text clarity and usability
This is especially important for user-facing features like dashboards and metrics where clear communication is critical.
backend/metrology/models.py (2)
289-293: Consider moving unit display mappings to configuration.

Lines 289-292 contain hardcoded unit name transformations (`percentage` → `%`, `request_per_second` → `RPS`). This creates a maintenance burden if new units with special display formats are added.

Consider storing the display format in the Terminology model (e.g., add a `display_format` field) or maintaining a centralized mapping dictionary at the module level.

🔎 Alternative approaches

Option 1: Add display_format to the Terminology model

```python
# In Terminology model
display_format = models.CharField(
    max_length=50,
    blank=True,
    help_text="Display format for UI (e.g., '%' for percentage)",
)
```

Option 2: Module-level mapping

```python
# At module top
UNIT_DISPLAY_MAP = {
    "percentage": "%",
    "request_per_second": "RPS",
    # Add more as needed
}

# In display_value()
unit = UNIT_DISPLAY_MAP.get(metric_definition.unit.name, metric_definition.unit.name)
```
403-414: String-based model dispatch is fragile; consider isinstance() checks.

Line 403 uses `obj.__class__.__name__` for string comparison to dispatch to different compute methods. This approach is fragile because:

- Model renames break silently (no errors, just returns an empty dict)
- Typos in string comparisons are not caught by static analysis
- It is less discoverable for developers

If circular imports allow, use `isinstance()` checks instead:

🔎 Proposed refactor

```diff
 @classmethod
 def compute_metrics(cls, obj):
     """
     Compute metrics for the given object based on its type.

     Args:
         obj: The model instance

     Returns:
         Dictionary of computed metrics
     """
-    model_name = obj.__class__.__name__
-
-    if model_name == "ComplianceAssessment":
+    # Import at function level to avoid circular imports if needed
+    from core.models import ComplianceAssessment, RiskAssessment, FindingsAssessment
+    from iam.models import Folder
+
+    if isinstance(obj, ComplianceAssessment):
         return cls._compute_compliance_assessment_metrics(obj)
-    elif model_name == "RiskAssessment":
+    elif isinstance(obj, RiskAssessment):
         return cls._compute_risk_assessment_metrics(obj)
-    elif model_name == "FindingsAssessment":
+    elif isinstance(obj, FindingsAssessment):
         return cls._compute_findings_assessment_metrics(obj)
-    elif model_name == "Folder":
+    elif isinstance(obj, Folder):
         return cls._compute_folder_metrics(obj)
     else:
         return {}
```

If circular imports are a concern, consider a registry pattern, or keep the string-based approach but add a comment explaining the tradeoff.
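If `isinstance()` dispatch is awkward because of circular imports, the registry pattern mentioned above could look like the following minimal sketch. The decorator name and the toy classes are illustrative assumptions, not the project's actual models or API.

```python
from typing import Callable, Dict, Type

# Maps a model class to its metrics-computation function
_METRIC_COMPUTERS: Dict[Type, Callable] = {}


def register_metrics(model_cls: Type) -> Callable:
    """Decorator associating a compute function with a model class."""
    def decorator(fn: Callable) -> Callable:
        _METRIC_COMPUTERS[model_cls] = fn
        return fn
    return decorator


def compute_metrics(obj) -> dict:
    """Dispatch on the object's MRO; unregistered types yield an empty dict."""
    for cls in type(obj).__mro__:
        fn = _METRIC_COMPUTERS.get(cls)
        if fn is not None:
            return fn(obj)
    return {}


# Toy stand-ins for the real Django models
class ComplianceAssessment: ...
class RiskAssessment: ...


@register_metrics(ComplianceAssessment)
def _compliance(obj):
    return {"progress": 0.5}


@register_metrics(RiskAssessment)
def _risk(obj):
    return {"open_risks": 3}
```

Each model module registers its own compute function at import time, so the dispatcher never needs to import the models and the circular-import problem disappears.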
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- backend/metrology/models.py (1 hunks)
- frontend/messages/fr.json (4 hunks)
- frontend/src/lib/components/Chart/MetricSampleChart.svelte (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/src/lib/components/Chart/MetricSampleChart.svelte
🧰 Additional context used
🧬 Code graph analysis (1)
backend/metrology/models.py (7)
backend/iam/models.py (2)
- FolderMixin (331-350), PublishInRootFolderMixin (353-368)

backend/core/base_models.py (2)
- AbstractBaseModel (8-116), NameDescriptionMixin (119-132)

backend/core/models.py (41)
- FilteringLabelMixin (192-198), I18nObjectMixin (163-170), Terminology (1260-1713), Category (4421-4436), Category (6744-6748), FieldPath (1265-1271)
- Meta (114-115), Meta (169-170), Meta (197-198), Meta (202-204)
- Status (2262-2268), Status (3660-3666), Status (3807-3812), Status (3976-3982), Status (4478-4483), Status (4564-4571), Status (4640-4645), Status (5573-5578), Status (6449-6453), Status (6843-6853), Status (7297-7304)
- save (2002-2010), save (2126-2129), save (3131-3133), save (3692-3694), save (3775-3782), save (3957-3961), save (4171-4195), save (4556-4558), save (4693-4696), save (4769-4811), save (5538-5568), save (5696-5702), save (6640-6660), save (6837-6839), save (6928-6934), save (7213-7226)
- delete (1244-1257), delete (5529-5536), metrics (5603-5617), risk (1889-1891)

backend/metrology/views.py (6)
- category (40-41), status (74-75), collection_frequency (79-80), chart_type (126-127), time_range (131-132), aggregation (136-137)

backend/metrology/serializers.py (12)
- Meta (19-21), Meta (31-33), Meta (38-40), Meta (66-68), Meta (86-88), Meta (106-108), Meta (117-119), Meta (128-130), Meta (142-151), Meta (354-356), Meta (386-388), Meta (398-411)

backend/global_settings/serializers.py (1)
- delete (31-34)

backend/global_settings/views.py (1)
- delete (29-33)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: build_community_frontend
- GitHub Check: build_enterprise_frontend
- GitHub Check: build (3.12)
- GitHub Check: test (3.12)
- GitHub Check: startup-docker-compose-test
- GitHub Check: enterprise-startup-docker-compose-test
- GitHub Check: enterprise-startup-functional-test (3.12)
- GitHub Check: startup-functional-test (3.12)
- GitHub Check: Analyze (python)
🔇 Additional comments (5)
frontend/messages/fr.json (2)
67-68: LGTM - New metrology section translations.

The new metrology section keys are well-translated and consistent with the existing file structure.
2283-2283: Verify shortened translation appears in clear context.

The translation for "metricsDefinitions" has been shortened from "Définitions des métriques" to "Définitions". While this saves space, ensure this key is used in UI contexts where it's clear we're referring to metric definitions (e.g., under a "Metrics" or "Metrology" section heading).
If this appears in isolation or in ambiguous contexts, consider using the full form or adding context through the surrounding UI elements.
backend/metrology/models.py (3)
822-906: Excellent validation logic for complex widget configuration.

The `clean()` method thoroughly validates the three widget modes (text, custom metric, builtin metric) with clear error messages and proper mutual exclusivity checks. The grid position validation also prevents layout issues. Well done!
908-920: Robust implementation with sensible defaults.

The function properly handles missing settings, invalid values, and provides a reasonable default retention period. The `max(1, int(retention))` guard ensures the value is always positive.
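The guard pattern praised above can be illustrated in isolation. The function name and the 365-day default are assumptions for the sketch; the real function reads the value from application settings.

```python
DEFAULT_RETENTION_DAYS = 365  # assumed default, for illustration only


def get_retention_days(raw_value) -> int:
    """Parse a retention setting, falling back to the default on missing
    or malformed input, and clamping the result to at least 1 day."""
    try:
        return max(1, int(raw_value))
    except (TypeError, ValueError):
        return DEFAULT_RETENTION_DAYS
```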
78-78: No action required: uniqueness validation is properly enforced.

MetricDefinition (line 78), MetricInstance (line 149), and Dashboard (line 602) inherit from `AbstractBaseModel`, which overrides `save()` to call `clean()` before persisting to the database. The `clean()` method validates uniqueness via `is_unique_in_scope()` using the `fields_to_check` attribute. This ensures uniqueness constraints are automatically enforced on every save, whether called directly via ORM or through forms and serializers.

Also applies to: 149-149, 602-602
Actionable comments posted: 1
🧹 Nitpick comments (1)
frontend/messages/en.json (1)
3017-3017: Consider punctuation consistency for hint texts.

The `cascadeAffectedHint` text ends with a period. While this isn't incorrect, hint texts throughout the file show inconsistent punctuation: some end with periods, others don't. For better consistency, consider establishing a convention (e.g., always include periods for complete sentences, omit them for fragments).
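A convention like this is easy to audit mechanically. The following is a standalone sketch (not part of the codebase) that groups `*Hint` keys from a loaded messages dict by whether their text ends with a period:

```python
def hint_punctuation_report(messages: dict[str, str]) -> dict[str, list[str]]:
    """Group *Hint keys by whether their text ends with a period."""
    report: dict[str, list[str]] = {"with_period": [], "without_period": []}
    for key, text in messages.items():
        if not key.endswith("Hint"):
            continue  # only audit hint texts
        bucket = "with_period" if text.rstrip().endswith(".") else "without_period"
        report[bucket].append(key)
    return report
```

Running it over the parsed `en.json` would immediately show which hint keys break whichever convention the team settles on.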
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- enterprise/frontend/src/lib/components/SideBar/navData.ts (1 hunks)
- frontend/messages/en.json (6 hunks)
- frontend/messages/fr.json (4 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: startup-docker-compose-test
- GitHub Check: startup-functional-test (3.12)
- GitHub Check: enterprise-startup-functional-test (3.12)
- GitHub Check: enterprise-startup-docker-compose-test
- GitHub Check: test (3.12)
- GitHub Check: build (3.12)
- GitHub Check: build_community_frontend
- GitHub Check: build_enterprise_frontend
- GitHub Check: Analyze (python)
🔇 Additional comments (6)
frontend/messages/en.json (5)
68-69: LGTM: Metrology context keys added correctly.

The new metrology keys follow the established pattern for top-level navigation items and provide a clear description of the feature's purpose.
266-266: LGTM: Import key follows established pattern.

The `importMetricDefinitions` key is consistent with other import-related translation keys in this section.
876-877: LGTM: Generic value display keys.

The `displayValue` and `currentValue` keys are appropriately placed and provide clear labels for UI elements.
2711-2928: LGTM: Comprehensive metrology feature translation keys.

This large block adds approximately 218 new translation keys for the metrology feature, including:
- Metric definitions, instances, and samples (custom and builtin)
- Dashboard and widget management
- Chart types and configurations
- Ordered entry list UI components
- Metric attributes and help text
The keys follow consistent naming conventions (camelCase) and include appropriate singular/plural pairs. The English translations are clear and descriptive.
3018-3021: LGTM: Metric-related description keys.

The `count`, `metricDefinitionsDescription`, `metricInstancesDescription`, and `dashboardsDescription` keys follow the established pattern for description texts and provide clear explanations of each feature.

frontend/messages/fr.json (1)
67-68: Verify technical terminology with French-speaking users before final approval.

The French translations for metrology/metrics/dashboard features are generally well-executed and consistent:
✅ Verified Strengths:
- Consistent terminology throughout ("tableau de bord" for dashboard, "métrique" for metric)
- Clear, instructive help texts (e.g., line 2961's dashboard editor instructions: drag widgets, use resize handles, save changes)
- Proper French grammar and capitalization
- Maintains consistency with existing translation patterns
⚠️ Recommendations:
Technical term choices: "widget" is consistently kept in English throughout (lines 2950–2983). While common in French tech contexts, confirm this aligns with your audience's expectations. Consider whether "composant" or "élément" would be more appropriate.
Domain expert review: With 86 new translation keys added (lines 2934–3019), have a French-speaking user with metrics/KPI domain knowledge validate terms like:
- "higherIsBetter" → "Plus élevé est mieux" (line 2935)
- "choicesDefinition" → "Définition des choix" (line 2937)
- "aggregation" → "Agrégation" (line 2943)
Context clarity: Ensure generic translations like "displayValue": "Valeur" (line 2987) are appropriately clear within actual UI context.
Actionable comments posted: 0
🧹 Nitpick comments (1)
backend/conftest.py (1)
1-70: LGTM! Well-structured shared test fixture.

The `RISK_MATRIX_JSON_DEFINITION` constant provides a valid, reusable risk matrix structure with consistent Low/Medium/High definitions across probability, impact, and risk dimensions. The 3x3 grid properly maps probability-impact combinations to risk levels.

Optional: Add docstring for clarity
Consider adding a more detailed docstring to explain the structure:
# Shared test fixtures and constants -# Valid minimal risk matrix json_definition for tests +""" +Valid minimal risk matrix json_definition for tests. +Defines a 3x3 risk matrix with: +- Probability levels: Low (0), Medium (1), High (2) +- Impact levels: Low (0), Medium (1), High (2) +- Risk levels: Low (0), Medium (1), High (2) +- Grid: Maps (probability, impact) pairs to risk level indices +""" RISK_MATRIX_JSON_DEFINITION = {
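For readers unfamiliar with the fixture, the structure described above can be sketched in plain Python. The key names and grid values here are illustrative assumptions, not necessarily the exact schema used in `backend/conftest.py`:

```python
# Hypothetical sketch of a minimal 3x3 risk matrix definition.
RISK_MATRIX_SKETCH = {
    "probability": [
        {"id": 0, "name": "Low"},
        {"id": 1, "name": "Medium"},
        {"id": 2, "name": "High"},
    ],
    "impact": [
        {"id": 0, "name": "Low"},
        {"id": 1, "name": "Medium"},
        {"id": 2, "name": "High"},
    ],
    "risk": [
        {"id": 0, "name": "Low"},
        {"id": 1, "name": "Medium"},
        {"id": 2, "name": "High"},
    ],
    # grid[probability][impact] -> index into "risk"
    "grid": [
        [0, 0, 1],
        [0, 1, 2],
        [1, 2, 2],
    ],
}


def risk_level(definition: dict, probability: int, impact: int) -> str:
    """Resolve the risk level name for a (probability, impact) pair."""
    index = definition["grid"][probability][impact]
    return definition["risk"][index]["name"]
```

The lookup helper is just to show how the 3x3 grid maps a probability/impact pair to a risk level index.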
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- backend/app_tests/api/test_api_risk_acceptances.py (3 hunks)
- backend/app_tests/api/test_api_risk_assessments.py (4 hunks)
- backend/app_tests/api/test_api_risk_scenarios.py (4 hunks)
- backend/app_tests/conftest.py (1 hunks)
- backend/conftest.py (1 hunks)
- backend/core/tests/test_models.py (9 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
backend/core/tests/test_models.py (1)
backend/core/models.py (2)
- `filename` (3708-3713)
- `filename` (3784-3785)
backend/app_tests/api/test_api_risk_scenarios.py (2)
frontend/src/lib/utils/types.ts (1)
- `RiskMatrix` (168-174)

backend/core/models.py (1)
- `RiskMatrix` (1827-1916)
backend/app_tests/api/test_api_risk_acceptances.py (3)
frontend/src/lib/utils/types.ts (1)
- `RiskMatrix` (168-174)

backend/core/models.py (1)
- `RiskMatrix` (1827-1916)

backend/app_tests/conftest.py (1)
- `test` (54-67)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: startup-functional-test (3.12)
- GitHub Check: enterprise-startup-docker-compose-test
- GitHub Check: build_enterprise_frontend
- GitHub Check: test (3.12)
- GitHub Check: build_community_frontend
- GitHub Check: enterprise-startup-functional-test (3.12)
- GitHub Check: startup-docker-compose-test
- GitHub Check: build (3.12)
- GitHub Check: Analyze (python)
🔇 Additional comments (6)
backend/app_tests/conftest.py (1)
9-10: LGTM! Clean re-export of shared test fixture.

The re-export pattern allows app tests to consistently import from `conftest` without needing to reference the backend module directly. The `noqa: F401` comment appropriately suppresses the unused import warning.

backend/app_tests/api/test_api_risk_scenarios.py (1)
15-15: LGTM! Consistent use of shared risk matrix test fixture.

The refactoring replaces inline RiskMatrix creation with the shared `RISK_MATRIX_JSON_DEFINITION` constant, improving test maintainability and consistency across the test suite.

Also applies to: 139-143, 176-180, 205-209
backend/app_tests/api/test_api_risk_acceptances.py (1)
14-14: LGTM! Consistent application of shared test fixture.

The RiskMatrix creation now uses the shared `RISK_MATRIX_JSON_DEFINITION`, aligning with the refactoring pattern applied across other risk-related test files.

Also applies to: 144-148, 200-204
backend/core/tests/test_models.py (2)
6-7: LGTM! Comprehensive adoption of shared test fixture.

The import and usage of `RISK_MATRIX_JSON_DEFINITION` across 7 test locations ensures consistent test data for risk matrix definitions throughout the model tests.

Also applies to: 114-114, 141-141, 246-246, 266-266, 291-291, 319-319, 347-347
72-75: LGTM! Correct fix for Django's file collision handling.The assertion change properly accounts for Django's behavior of adding unique suffixes to filenames when avoiding collisions. The prefix/suffix check is more robust than strict equality, and the explanatory comment clarifies the intent.
backend/app_tests/api/test_api_risk_assessments.py (1)
7-7: LGTM! Consistent refactoring of unauthenticated test cases.

The RiskMatrix creation in unauthenticated tests now uses the shared `RISK_MATRIX_JSON_DEFINITION` constant. Note that authenticated tests appropriately use imported library-based risk matrices, which is the correct approach for those test scenarios.

Also applies to: 34-38, 71-75, 98-102
Actionable comments posted: 2
🧹 Nitpick comments (2)
backend/metrology/models.py (2)
159-164: Potential N+1 query or AttributeError in `save()` method.

The `save()` method checks `self.metric_definition_id` but then directly accesses `self.metric_definition.default_target`. If the `metric_definition` relation isn't already loaded, this could trigger an additional database query for each save operation. In bulk operations, this could lead to N+1 query problems.

🔎 More efficient approach

```diff
 def save(self, *args, **kwargs):
     # Inherit default_target from metric_definition if target_value is not set
     if self.target_value is None and self.metric_definition_id:
-        if self.metric_definition.default_target is not None:
-            self.target_value = self.metric_definition.default_target
+        # Only fetch if not already loaded
+        if not hasattr(self, '_metric_definition_cache'):
+            from metrology.models import MetricDefinition
+            metric_def = MetricDefinition.objects.only('default_target').get(
+                id=self.metric_definition_id
+            )
+            if metric_def.default_target is not None:
+                self.target_value = metric_def.default_target
+        elif self.metric_definition.default_target is not None:
+            self.target_value = self.metric_definition.default_target
     super().save(*args, **kwargs)
```

Alternatively, ensure callers use `select_related('metric_definition')` when bulk-saving instances.
283-295: Hardcoded unit display mappings reduce maintainability.

Lines 289-292 hardcode unit name to display symbol mappings (`"percentage"` → `"%"`, `"request_per_second"` → `"RPS"`). This approach doesn't scale as new units are added and couples display logic to specific terminology names.

💡 Recommended approach

Consider adding a `display_symbol` field to the `Terminology` model for metric units, or use a configuration mapping:

Option 1: Add to Terminology model

```python
# In Terminology model
display_symbol = models.CharField(max_length=20, blank=True, null=True)
```

Option 2: Use a configurable mapping

```python
# At module level or in settings
UNIT_DISPLAY_SYMBOLS = {
    "percentage": "%",
    "request_per_second": "RPS",
    "bytes": "B",
    "milliseconds": "ms",
    # ... extensible without code changes
}

# In display_value()
unit = metric_definition.unit.name
display_symbol = UNIT_DISPLAY_SYMBOLS.get(unit, unit)
return f"{result} {display_symbol}"
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- backend/metrology/models.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: build_enterprise_frontend
- GitHub Check: build_community_frontend
- GitHub Check: build (3.12)
- GitHub Check: startup-docker-compose-test
- GitHub Check: enterprise-startup-docker-compose-test
- GitHub Check: startup-functional-test (3.12)
- GitHub Check: enterprise-startup-functional-test (3.12)
- GitHub Check: test (3.12)
- GitHub Check: Analyze (python)
🔇 Additional comments (2)
backend/metrology/models.py (2)
826-910: Widget validation logic is comprehensive and correct.

The `clean()` method properly enforces:
- Text widgets must not have metric fields (lines 831-849)
- Non-text widgets must have exactly one metric type (lines 851-872)
- Builtin metrics must have all required fields (lines 874-895)
- Grid positioning must respect 12-column boundaries (lines 897-909)
The validation correctly handles mutual exclusivity and completeness constraints.
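The rules listed above can be restated as a small standalone validator. The field names here (`widget_type`, `metric_instance`, `builtin_metric`, `x`, `w`) are illustrative assumptions, not the model's confirmed API:

```python
def validate_widget(widget: dict) -> list[str]:
    """Toy restatement of the widget validation rules (field names assumed)."""
    errors = []

    # Mutual exclusivity: count how many metric sources are set
    sources = [widget.get("metric_instance"), widget.get("builtin_metric")]
    n_sources = sum(source is not None for source in sources)

    if widget.get("widget_type") == "text":
        if n_sources:
            errors.append("text widgets must not reference metrics")
    elif n_sources != 1:
        errors.append("exactly one metric source is required")

    # Grid positioning must respect 12-column boundaries
    x, width = widget.get("x", 0), widget.get("w", 1)
    if x < 0 or x + width > 12:
        errors.append("widget exceeds the 12-column grid")
    return errors
```

Returning a list of errors (rather than raising on the first one) mirrors how a Django `clean()` can surface several problems at once.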
912-924: LGTM: Defensive retention configuration with sensible defaults.

The function correctly:
- Reads from global settings with a reasonable 2-year default
- Enforces a minimum of 1 day to prevent data loss
- Handles all expected exceptions gracefully
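The behavior described above can be sketched with a plain dict standing in for the global settings; the setting key name is hypothetical:

```python
DEFAULT_RETENTION_DAYS = 730  # roughly two years


def get_retention_days(settings: dict) -> int:
    """Read the retention period from a settings mapping, falling back to
    ~2 years on any bad value and enforcing a floor of 1 day."""
    try:
        days = int(settings.get("METRICS_RETENTION_DAYS", DEFAULT_RETENTION_DAYS))
    except (TypeError, ValueError):
        # Malformed configuration: fall back to the safe default
        days = DEFAULT_RETENTION_DAYS
    return max(days, 1)  # never retain less than one day
```

The 1-day floor is the data-loss guard the review calls out: even a misconfigured `0` or negative value cannot wipe all samples.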
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
backend/resilience/serializers.py (1)
126-126: Remove debug print statement.

The `print(validated_data)` statement is a debug artifact that should be removed before merging. Debug print statements can leak sensitive data to logs and clutter production output.

🔎 Proposed fix

```diff
- print(validated_data)
```

backend/core/models.py (1)
4836-4879: RiskAssessment.save does not persist new instances (super().save only runs when pk is set)

As written, `RiskAssessment.save()` only calls `super().save()` inside `if self.pk:`. For newly created instances (`self.pk` is `None`), the method will skip the `transaction.atomic()` block entirely and go straight to `self.upsert_daily_metrics()`, meaning the object is never actually saved to the database. That's a correctness blocker regardless of the new metrics behavior.

You can fix this by always saving the instance once, and only applying the "matrix_changed" clamping logic on updates:
Proposed fix for `RiskAssessment.save`

```diff
-    def save(self, *args, **kwargs) -> None:
-        old_matrix_id = None
-        if self.pk:
-            old_matrix_id = (
-                RiskAssessment.objects.filter(pk=self.pk)
-                .values_list("risk_matrix_id", flat=True)
-                .first()
-            )
-
-            matrix_changed = (
-                old_matrix_id is not None and old_matrix_id != self.risk_matrix_id
-            )
-
-            with transaction.atomic():
-                super().save(*args, **kwargs)
-
-                if matrix_changed:
-                    probabilities = list(range(len(self.risk_matrix.probability or [])))
-                    impacts = list(range(len(self.risk_matrix.impact or [])))
-
-                    min_prob, max_prob = min(probabilities), max(probabilities)
-                    min_impact, max_impact = min(impacts), max(impacts)
-
-                    fields = [
-                        ("current_proba", min_prob, max_prob),
-                        ("current_impact", min_impact, max_impact),
-                        ("residual_proba", min_prob, max_prob),
-                        ("residual_impact", min_impact, max_impact),
-                        ("inherent_proba", min_prob, max_prob),
-                        ("inherent_impact", min_impact, max_impact),
-                    ]
-
-                    for scenario in self.risk_scenarios.all():
-                        for field_name, min_val, max_val in fields:
-                            value = getattr(scenario, field_name)
-                            if value < 0:  # keep "not rated"
-                                continue
-                            setattr(
-                                scenario, field_name, max(min_val, min(value, max_val))
-                            )
-                            scenario.save()
-
-        self.upsert_daily_metrics()
+    def save(self, *args, **kwargs) -> None:
+        old_matrix_id = None
+        matrix_changed = False
+        if self.pk:
+            old_matrix_id = (
+                RiskAssessment.objects.filter(pk=self.pk)
+                .values_list("risk_matrix_id", flat=True)
+                .first()
+            )
+            matrix_changed = (
+                old_matrix_id is not None and old_matrix_id != self.risk_matrix_id
+            )
+
+        with transaction.atomic():
+            super().save(*args, **kwargs)
+
+            if matrix_changed:
+                probabilities = list(range(len(self.risk_matrix.probability or [])))
+                impacts = list(range(len(self.risk_matrix.impact or [])))
+
+                min_prob, max_prob = min(probabilities), max(probabilities)
+                min_impact, max_impact = min(impacts), max(impacts)
+
+                fields = [
+                    ("current_proba", min_prob, max_prob),
+                    ("current_impact", min_impact, max_impact),
+                    ("residual_proba", min_prob, max_prob),
+                    ("residual_impact", min_impact, max_impact),
+                    ("inherent_proba", min_prob, max_prob),
+                    ("inherent_impact", min_impact, max_impact),
+                ]
+
+                for scenario in self.risk_scenarios.all():
+                    for field_name, min_val, max_val in fields:
+                        value = getattr(scenario, field_name)
+                        if value < 0:  # keep "not rated"
+                            continue
+                        setattr(
+                            scenario,
+                            field_name,
+                            max(min_val, min(value, max_val)),
+                        )
+                        scenario.save()
+
+        self.upsert_daily_metrics()
```
♻️ Duplicate comments (1)
frontend/src/lib/utils/table.ts (1)
2368-2368: Fix missing "metrology/" prefix on metric filter endpoints.

Several metrology filter endpoints are inconsistent with the rest of the metrology configuration and lack the required `metrology/` prefix:
- Line 2368: `metric-definitions/category` → should be `metrology/metric-definitions/category`
- Line 2388: `metric-definitions/provider` → should be `metrology/metric-definitions/provider`
- Line 2402: `metric-definitions` → should be `metrology/metric-definitions`
- Line 2410: `metric-instances/status` → should be `metrology/metric-instances/status`

In contrast, dashboard-related filters at lines 2462, 2470, 2478, 2496, and 2511 correctly use the `metrology/` prefix. This inconsistency may cause API routing failures.

🔎 Proposed fix
```diff
 category: {
 	component: AutocompleteSelect,
 	props: {
-		optionsEndpoint: 'metric-definitions/category',
+		optionsEndpoint: 'metrology/metric-definitions/category',
 		optionsLabelField: 'label',
 		optionsValueField: 'value',
 		label: 'category',
 		browserCache: 'force-cache',
 		multiple: true
 	}
 },
 // ... library filter ...
 provider: {
 	...PROVIDER_FILTER,
 	props: {
 		...PROVIDER_FILTER.props,
-		optionsEndpoint: 'metric-definitions/provider'
+		optionsEndpoint: 'metrology/metric-definitions/provider'
 	}
 },
```

```diff
 metric_definition: {
 	component: AutocompleteSelect,
 	props: {
-		optionsEndpoint: 'metric-definitions',
+		optionsEndpoint: 'metrology/metric-definitions',
 		label: 'metricDefinition',
 		multiple: true
 	}
 },
 status: {
 	component: AutocompleteSelect,
 	props: {
-		optionsEndpoint: 'metric-instances/status',
+		optionsEndpoint: 'metrology/metric-instances/status',
 		optionsLabelField: 'label',
 		optionsValueField: 'value',
 		label: 'status',
 		browserCache: 'force-cache',
 		multiple: true
 	}
 },
```

Also applies to: 2388-2388, 2402-2402, 2410-2410
🧹 Nitpick comments (6)
frontend/messages/fr.json (1)
2286-2286: Shortened translation may reduce clarity.

The translation was shortened from "Définitions des métriques" to just "Définitions". While this may be intentional for UI brevity (e.g., a tab label within a metrics context), ensure users will understand this refers specifically to metric definitions when displayed.
frontend/messages/en.json (1)
2710-2727: Minor inconsistency: "Builtin" vs "builtin" capitalization.

There's a capitalization inconsistency between the keys:

- Line 2721: `"builtinMetricSample": "Builtin metric sample"` (capitalized)
- Line 2724: `"builtinMetric": "builtin metric"` (lowercase)

For consistency with the pattern at line 2721 and line 2723 (`"customMetric": "Custom metric"`), consider standardizing:

🔎 Suggested fix for consistency

```diff
- "builtinMetric": "builtin metric",
+ "builtinMetric": "Builtin metric",
```

backend/core/models.py (4)
509-512: Library metric_definitions import looks solid; consider minor robustness tweaks

The new `metric_definitions` handling and `update_metric_definitions()` wiring into `update_library()` are coherent and match the existing threats/reference_controls pattern. Two small robustness nits you might consider (non-blocking):

When normalizing URNs, also overwrite the value in `defaults` to keep the stored value canonical:

```python
normalized_urn = metric_definition["urn"].lower()
metric_definition = {**metric_definition, "urn": normalized_urn}
```

If the `unit` lookup by name fails, we silently store `unit=None`. If libraries are expected to always reference a valid unit, logging a warning (or at least an `ic` trace during development) on lookup miss would help diagnose library content issues.

Overall behavior is correct; these are just resilience improvements.
Also applies to: 573-599, 1189-1192
1338-1338: Metric units terminology + translation handling are consistent; could be centralized

Adding `FieldPath.METRIC_UNIT`, `DEFAULT_METRIC_UNITS`, and `create_default_metric_units()` lines up nicely with `MetricDefinition.unit.limit_choices_to` and the new metric subsystem. The updated `get_name_translated` correctly supports both legacy string translations and the new `{"fr": {"name": "..."}}` structure, with a sensible fallback to `self.name.capitalize()`.

If you find more places needing this "dict-or-string translation with nested `name` key" logic (e.g., other referential models), consider extracting a small helper (e.g., `resolve_i18n_name(self.translations, self.name)`) to avoid subtle divergence between implementations over time. Not urgent, just a DRY/readability win.

Also applies to: 1625-1682, 1755-1762, 1764-1773
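A sketch of what such a helper could look like, assuming the dict-or-string translation shapes described above (the signature is hypothetical, not the project's confirmed API):

```python
def resolve_i18n_name(translations, name, locale="fr"):
    """Resolve a display name from either the legacy string form
    ({"fr": "..."}) or the nested form ({"fr": {"name": "..."}}),
    falling back to name.capitalize()."""
    entry = (translations or {}).get(locale)
    if isinstance(entry, dict):
        nested = entry.get("name")
        if nested:
            return nested
    elif isinstance(entry, str) and entry:
        return entry
    # Fallback when no usable translation is present
    return name.capitalize()
```

Centralizing both shapes in one function keeps the fallback behavior identical wherever it is reused.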
4807-4811: BuiltinMetricSample hooks are correct; watch potential extra load on hot paths

Hooking `BuiltinMetricSample.update_or_create_snapshot()` into:

- `RiskAssessment.upsert_daily_metrics()`
- `ComplianceAssessment.upsert_daily_metrics()`
- `FindingsAssessment.upsert_daily_metrics()` (via its new `save()` override)

gives you a nice, centralized way to keep the daily builtin metrics snapshots in sync with the domain models. The call sites are appropriate and reuse existing "upsert_daily_metrics" entry points.
The only caveat is performance: each save on these models (and on `RiskScenario.save()` / `RequirementAssessment.save()` / `Finding.save()`, which call into the upsert methods) now triggers a recompute for the whole object's metric set. For large assessments or very chatty update patterns this could add noticeable DB and CPU load.

If that ever becomes an issue, a future enhancement could be to:
- debounce these updates via a lightweight queue (e.g., schedule a Celery task keyed by `(model, id, date)`), or
- gate the snapshot update behind a cheap "have fields that affect metrics changed?" check.
For now, behavior is correct; this is just something to keep in mind if you see hotspots around these saves.
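The debounce idea can be sketched without Celery as an in-memory coalescer keyed by `(model, id, date)`; a real implementation would enqueue one task per key at flush time instead of calling a function directly:

```python
import datetime


class SnapshotDebouncer:
    """Coalesce repeated snapshot requests for the same (model, id, date)
    so each object is recomputed at most once per flush (illustrative
    sketch, not the project's actual task queue)."""

    def __init__(self):
        self._pending = set()

    def schedule(self, model: str, obj_id: int, date=None):
        """Record that (model, obj_id) needs a snapshot for the given day."""
        self._pending.add((model, obj_id, date or datetime.date.today()))

    def flush(self, recompute):
        """Invoke recompute(model, obj_id, date) once per unique key."""
        keys, self._pending = self._pending, set()
        for key in sorted(keys):
            recompute(*key)
        return len(keys)
```

Because the pending set deduplicates keys, ten saves of the same assessment in one flush window cost a single recompute.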
Also applies to: 5758-5762, 6898-6907
6995-7002: Finding → FindingsAssessment metrics propagation is correct; may be chatty for bulk updates

The new `Finding.save()` override correctly:

- saves the finding,
- bumps the parent `FindingsAssessment.updated_at` via a direct `update()` (avoiding recursive saves),
- and calls `self.findings_assessment.upsert_daily_metrics()`, which in turn updates the `BuiltinMetricSample` snapshot.

Functionally this is what you want, but note that any batch-ish operations that iterate `Finding.save()` in a loop will now recompute the entire findings assessment metrics once per finding. If you expect bulk edits, consider (later) either:

- adding a `skip_metrics` kwarg for callers that will explicitly refresh once at the end, or
- introducing a small "dirty flag" and a periodic/task-based recomputation.
Not mandatory for this PR, but worth being aware of.
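A toy illustration of the `skip_metrics` idea; the class and method names below are simplified stand-ins for the real models, not the project's actual API:

```python
class Assessment:
    """Hypothetical parent counting how often metrics are recomputed."""

    def __init__(self):
        self.metric_refreshes = 0

    def upsert_daily_metrics(self):
        self.metric_refreshes += 1


class Finding:
    """Sketch of a save() that lets bulk callers defer metric recomputation."""

    def __init__(self, assessment):
        self.assessment = assessment

    def save(self, *, skip_metrics=False):
        # ... persist the finding here ...
        if not skip_metrics:
            self.assessment.upsert_daily_metrics()


def bulk_save(findings):
    """Save many findings but refresh the parent metrics only once."""
    for finding in findings:
        finding.save(skip_metrics=True)
    if findings:
        findings[0].assessment.upsert_daily_metrics()
```

This turns N recomputations into one for an N-row batch, while single-object saves keep the current eager behavior.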
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- backend/ciso_assistant/settings.py (1 hunks)
- backend/core/models.py (10 hunks)
- backend/core/startup.py (3 hunks)
- backend/core/urls.py (1 hunks)
- backend/resilience/serializers.py (1 hunks)
- enterprise/backend/enterprise_core/settings.py (1 hunks)
- frontend/messages/en.json (7 hunks)
- frontend/messages/fr.json (5 hunks)
- frontend/src/lib/components/Forms/ModelForm/AssetAssessmentForm.svelte (1 hunks)
- frontend/src/lib/utils/table.ts (3 hunks)
- frontend/src/lib/utils/types.ts (1 hunks)
- frontend/src/routes/(app)/(internal)/[model=urlmodel]/+page.svelte (1 hunks)
- frontend/src/routes/(app)/(internal)/stored-libraries/[id=uuid]/+page.svelte (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
- frontend/src/routes/(app)/(internal)/[model=urlmodel]/+page.svelte
- backend/core/urls.py
- enterprise/backend/enterprise_core/settings.py
- backend/ciso_assistant/settings.py
- frontend/src/routes/(app)/(internal)/stored-libraries/[id=uuid]/+page.svelte
- backend/core/startup.py
🧰 Additional context used
🧬 Code graph analysis (2)
backend/resilience/serializers.py (1)
backend/core/serializer_fields.py (1)
- `FieldsRelatedField` (26-96)
backend/core/models.py (2)
backend/core/views.py (8)
- `get` (1113-1120), `get` (6744-6745)
- `filter` (190-216)
- `field_path` (11726-11727)
- `update` (775-808), `update` (5169-5181), `update` (5431-5444), `update` (8952-8955)

backend/metrology/models.py (5)
- `MetricDefinition` (19-92)
- `BuiltinMetricSample` (314-588)
- `update_or_create_snapshot` (366-390)
- `save` (159-164), `save` (299-303)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (python)
🔇 Additional comments (15)
frontend/messages/fr.json (3)
67-68: LGTM - New metrology navigation entries.The translations for the new metrology feature navigation are appropriate. "Métriques" is a suitable French translation for the feature, and the description clearly explains the functionality.
2952-2952: Translation includes additional context not in the English key.

The English key `showLegend` is translated as "Afficher le titre et la légende", which includes "titre" (title). This might be intentional if the feature actually shows both title and legend together, but verify this matches the actual UI behavior to avoid confusion.
2938-3022: New metrology translation keys look good.

The translations for the new dashboard and metrics feature are well-formed and follow French grammar conventions. Key observations:

- "Plus élevé est mieux" for `higherIsBetter` is clear
- Metric-related terminology (`métrique`, `tableau de bord`, `widget`) is consistent
- Widget configuration terms are properly translated
frontend/messages/en.json (8)
68-69: LGTM! Clear metrology section introduction.

The metrology section introduction is well-structured with a clear description of the feature's purpose.
266-266: LGTM! Consistent with import key patterns.

The import key follows the established naming convention and is grammatically correct.

475-475: LGTM! Improved clarity for users.

The label change from "BIA assessment" to "Asset impacts assessment" improves clarity, making it more accessible to users who may not be familiar with the BIA acronym.

1875-1876: LGTM! Clear value terminology.

The display value and current value keys are well-defined and appropriate for their use cases.

2728-2764: LGTM! Well-structured dashboard and widget keys.

The dashboard and widget keys are comprehensive, consistently named, and provide clear user-facing text for the metrology UI components.

2765-2775: LGTM! Clear metric configuration keys with helpful descriptions.

The metric configuration keys are well-defined with informative help text that guides users on how to use these settings effectively.

2913-2927: LGTM! Comprehensive ordered entry list UI keys.

The ordered entry list keys provide a complete set of translations for the UI component, including accessibility labels and debug information. The metric type keys (qualitative, quantitative, etc.) are clear and appropriate.

3019-3024: LGTM! Clear final metrology descriptions and improved cascade hint.

The metrology feature descriptions are concise and informative. The enhanced cascade affected hint provides clear guidance on the impact of deletion operations.
frontend/src/lib/utils/table.ts (2)
2351-2517: Well-structured metrology configurations.

The seven new metrology-related list view configurations follow a consistent pattern with appropriate head/body field mappings and filters. The use of shared filter constants (DOMAIN_FILTER, LABELS_FILTER) maintains consistency with existing configurations.

1777-1778: Nested asset property access is correctly configured.

The backend `AssetAssessmentReadSerializer` returns the `asset` field as a nested object with the fields `["id", "str", "ref_id"]`, which correctly supports the frontend's access to `asset.ref_id` and `asset.str` at lines 1777-1778.

frontend/src/lib/utils/types.ts (1)

127-135: LGTM! Metrology URL models added correctly.

The seven new metrology URL model entries are properly integrated following the existing naming convention (kebab-case) and are appropriately grouped with a comment. These entries align with the listViewFields configurations in `table.ts`.

backend/resilience/serializers.py (1)

44-44: No changes needed. The `asset` field specification follows the established pattern used throughout the serializer for related fields (e.g., `asset_class`, `bia`). The fields `["id", "str", "ref_id"]` provide sufficient asset identification and align with the codebase's API design approach of explicitly controlling nested object representation.
Actionable comments posted: 1
🧹 Nitpick comments (2)
backend/metrology/migrations/0001_initial.py (2)
347-354: Consider adding `related_name` to the `metric_definition` ForeignKey.

The `metric_definition` FK lacks an explicit `related_name`. Django will auto-generate `metricinstance_set`, but adding an explicit name like `related_name="instances"` would improve code clarity when querying metric instances from a definition:

```python
# With explicit related_name="instances":
definition.instances.all()  # clearer intent

# vs auto-generated:
definition.metricinstance_set.all()
```
592-656: Consider adding a compound index on `(metric_instance, timestamp)` for query performance.

Time-series queries will typically filter by `metric_instance` and order by `timestamp`. While `timestamp` has `db_index=True`, a compound index would significantly improve performance for the common query pattern:

```python
CustomMetricSample.objects.filter(metric_instance=instance).order_by('-timestamp')
```

🔎 Proposed index addition

```python
options={
    "verbose_name": "Custom metric sample",
    "verbose_name_plural": "Custom metric samples",
    "ordering": ["-timestamp"],
    "indexes": [
        models.Index(
            fields=["metric_instance", "timestamp"],
            name="metrology_c_metric_ts_idx",
        )
    ],
},
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- backend/core/migrations/0120_alter_terminology_field_path.py (1 hunks)
- backend/metrology/migrations/0001_initial.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
backend/metrology/migrations/0001_initial.py (2)
backend/core/migrations/0120_alter_terminology_field_path.py (1)
- `Migration` (6-28)

backend/iam/models.py (2)
- `Folder` (80-328)
- `get_root_folder_id` (92-97)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (38)
- GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/settings/general.test.ts)
- GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/findings-assessments.test.ts)
- GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/tprm.test.ts)
- GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/compliance-assessments.tes...
- GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/common.test.ts)
- GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/ebios-rm.test.ts)
- GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/settings/sso.test.ts)
- GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/startup.test.ts)
- GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/business-impact-analysis.t...
- GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/my-assignments.test.ts)
- GitHub Check: functional-tests (3.12, chromium, tests/functional/startup.test.ts)
- GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/incidents.test.ts)
- GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/login.test.ts)
- GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/libraries.test.ts)
- GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/analytics.test.ts)
- GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/detailed/mappings.test.ts)
- GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/common.test.ts)
- GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/incidents.test.ts)
- GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/login.test.ts)
- GitHub Check: enterprise-functional-tests (3.12, chromium, tests/functional/user-route.test.ts)
- GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/my-assignments.test.ts)
- GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/settings/general.test.ts)
- GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/analytics.test.ts)
- GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/settings/sso.test.ts)
- GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/business-impact-analysis.test.ts)
- GitHub Check: functional-tests (3.12, chromium, tests/functional/user-route.test.ts)
- GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/libraries.test.ts)
- GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/findings-assessments.test.ts)
- GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/mappings.test.ts)
- GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/ebios-rm.test.ts)
- GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/compliance-assessments.test.ts)
- GitHub Check: functional-tests (3.12, chromium, tests/functional/detailed/tprm.test.ts)
- GitHub Check: test (3.12)
- GitHub Check: enterprise-startup-functional-test (3.12)
- GitHub Check: enterprise-startup-docker-compose-test
- GitHub Check: startup-docker-compose-test
- GitHub Check: startup-functional-test (3.12)
- GitHub Check: build (3.12)
🔇 Additional comments (6)
backend/core/migrations/0120_alter_terminology_field_path.py (2)
6-9: LGTM!

The migration dependency chain is correctly specified.

11-27: Verify that no existing Terminology records have field_path values outside the new choices.

This migration updates the `field_path` field choices to add `metric_definition.unit`. The new choices now include: `ro_to.risk_origin`, `qualifications`, `accreditation.status`, `accreditation.category`, `entity.relationship`, and `metric_definition.unit`.

While this migration does not change the database at all (it exists only for Django's internal state tracking), confirm that all existing Terminology records have `field_path` values present in this new choices list. Existing records with values outside this list could cause validation errors when accessed via forms or the ORM, even though the migration itself is safe.

backend/metrology/migrations/0001_initial.py (4)
10-18: LGTM - Migration dependencies are correctly configured.

The dependencies properly reference the required migrations from `contenttypes`, `core` (including the Terminology field_path change for `metric_definition.unit`), `iam`, and the swappable `AUTH_USER_MODEL`.
657-732: LGTM - BuiltinMetricSample is well designed.

Good design choices:

- Compound index on `(content_type, object_id, date)` optimizes the common lookup pattern
- `UniqueConstraint` prevents duplicate samples per object per day
- Separate indexes on `object_id` and `date` support additional query patterns
- No `folder` FK is appropriate since these are system-generated samples
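The one-sample-per-object-per-day behavior enforced by the constraint can be mimicked with a toy store keyed the same way (names are illustrative, not the actual model API):

```python
import datetime


class DailySampleStore:
    """Toy upsert store keyed by (content_type, object_id, date), mirroring
    the uniqueness constraint on BuiltinMetricSample described above."""

    def __init__(self):
        self._rows = {}

    def upsert(self, content_type: str, object_id: int, metrics: dict, date=None):
        """Create or overwrite the sample for the given object and day."""
        key = (content_type, object_id, date or datetime.date.today())
        self._rows[key] = metrics  # a second call on the same day overwrites
        return key

    def count(self) -> int:
        return len(self._rows)
```

Repeated saves of the same assessment on one day collapse into a single row, while a new day produces a fresh snapshot.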
134-137: Reconsider using `null=True` on the `name` CharField.

Django convention is to use an empty string, not NULL, as the "no data" state for string-based fields. The `name` field allowing `null=True` is inconsistent with this pattern and creates two possible "no data" states. The only exception is when a CharField has both `unique=True` and `blank=True` set, which requires `null=True` to avoid unique constraint violations. Verify whether this exception applies to your `name` field; if not, remove `null=True` and rely on empty strings instead.
562-584: Widget data source validation is already enforced.

The `DashboardWidgetWriteSerializer.validate()` method already ensures that non-text widgets must have either `metric_instance` or `target_content_type` (but not both, and not neither). It raises a ValidationError with "Please select either a custom metric instance or a builtin metric..." Text widgets are correctly allowed to skip metric configuration. The suggested validation is already implemented at the application level.
Likely an incorrect or invalid review comment.
Actionable comments posted: 1
🧹 Nitpick comments (2)
backend/library/core_cybersecurity_metrics.yaml (1)
15-294: Excellent metric definitions with comprehensive coverage.The 25 metric definitions are well-structured and cover essential cybersecurity domains:
- Quantitative metrics (19): All have appropriate units (percentage, count, days, hours) and sensible
higher_is_betterflags- Qualitative metrics (6): Follow consistent 5-level maturity model (Initial → Developing → Defined → Managed → Optimized)
- Naming conventions: URNs and ref_ids follow consistent patterns
- Targets: Default targets are aspirational (e.g., 0.0 for incidents, 100% for MFA) but align with industry best practices
The aggressive default targets (0.0 incidents/vulnerabilities, 100% coverage) are defensible as goals and can be customized per instance through the metrology framework.
💡 Optional: Consider documenting target customization guidance
While the default targets represent best-practice goals, organizations may want guidance on customizing them based on:
- Organizational maturity and risk appetite
- Industry benchmarks and peer comparisons
- Resource constraints and compliance requirements
Consider adding a README or inline comments explaining that these are aspirational defaults.
frontend/messages/fr.json (1)
**2286-2286: Verify the shortened translation stays clear in UI context.** The translation for `metricsDefinitions` was shortened from "Définitions des métriques" to just "Définitions". While this may be intentional for UI space constraints, losing the qualifier "des métriques" could reduce clarity if the label appears without sufficient surrounding context. Please verify that the abbreviated form remains clear and unambiguous in all UI contexts where it's displayed (menus, buttons, page titles, etc.).
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`tools/metrics/core_cybersecurity_metrics.xlsx` is excluded by `!**/*.xlsx`
📒 Files selected for processing (3)
- `backend/library/core_cybersecurity_metrics.yaml` (1 hunks)
- `frontend/messages/fr.json` (5 hunks)
- `frontend/src/routes/(app)/(internal)/libraries/+page.svelte` (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: build (3.12)
- GitHub Check: test (3.12)
- GitHub Check: build_community_frontend
- GitHub Check: build_enterprise_frontend
- GitHub Check: enterprise-startup-functional-test (3.12)
- GitHub Check: startup-functional-test (3.12)
- GitHub Check: startup-docker-compose-test
- GitHub Check: enterprise-startup-docker-compose-test
- GitHub Check: Analyze (python)
🔇 Additional comments (4)
backend/library/core_cybersecurity_metrics.yaml (1)
**1-14: LGTM! Library metadata is well-structured.** The library metadata follows the expected format with all required fields present (URN, locale, version, dates, etc.). The publication date of December 13, 2025 is reasonable for a library prepared in advance of release.
frontend/messages/fr.json (3)
**67-68: LGTM - New metrology translations.** The new metrology feature translations are clear and well-written. The description effectively conveys the purpose of tracking metrics and KPIs on customizable dashboards.
Also applies to: 258-258
**1793-1793: LGTM - Improved translation clarity.** The updated translation from "Focus" to "Analyse d'impact de l'actif" is more descriptive and accurately conveys the purpose of the asset assessment feature in the BIA context.
**2938-3022: LGTM - Comprehensive metrology feature translations.** The addition of 84 new translation keys for the metrology feature is well-structured and consistent. The translations cover:
- Metric definitions and instances
- Dashboard and widget configuration
- Sample collection and tracking
- UI labels and help text
The French translations are grammatically correct and follow the established patterns in the file.
Recommendation: Verify completeness by checking that:
- All UI strings in the metrology feature code reference these translation keys
- No English fallback strings are hardcoded in the implementation
- All user-facing text for this feature is properly internationalized
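A check like this can be automated; a minimal sketch, assuming translation calls look like `m.someKey(...)` in Svelte source (the regex and the catalog shape are assumptions, not this repo's verified conventions):

```python
import re

def find_missing_keys(component_source: str, catalog: dict) -> set[str]:
    """Return i18n keys invoked as m.someKey(...) in component source
    but absent from the messages catalog."""
    used = set(re.findall(r"\bm\.([A-Za-z_]\w*)\s*\(", component_source))
    return used - set(catalog)

svelte_snippet = "<h1>{m.metricsDefinitions()}</h1><span>{m.currentValue()}</span>"
catalog = {"metricsDefinitions": "Metric definitions"}
missing = find_missing_keys(svelte_snippet, catalog)
```

Running such a scan over the metrology routes before release would surface any key that falls back to English.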
Actionable comments posted: 0
🧹 Nitpick comments (1)
frontend/messages/en.json (1)
**1874-1876: Remove the unused translation key `displayValue`.** The key `"displayValue": "Value"` (line 1875) does not appear to be referenced anywhere in the codebase as a translation key. In contrast, `"currentValue": "Current Value"` (line 1876) is actively used in the metric-instances page. The `displayValue` string variable names found in Svelte components are unrelated component-level variables, not references to this i18n key. Consider removing this unused translation key to reduce maintenance burden.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
frontend/messages/en.json(6 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: build_enterprise_frontend
- GitHub Check: build_community_frontend
- GitHub Check: startup-functional-test (3.12)
- GitHub Check: startup-docker-compose-test
- GitHub Check: enterprise-startup-docker-compose-test
- GitHub Check: enterprise-startup-functional-test (3.12)
- GitHub Check: test (3.12)
- GitHub Check: build (3.12)
- GitHub Check: Analyze (python)
🔇 Additional comments (4)
frontend/messages/en.json (4)
**68-69: LGTM: Core metrology keys are well-defined.** The metrology feature keys follow the established naming conventions and provide clear, user-friendly descriptions. The description effectively communicates the purpose (tracking KPIs/KRIs and dashboard visualization).
Also applies to: 266-266
**2710-2775: LGTM: Comprehensive metric and dashboard localization keys.** The extensive set of keys for the metrology feature demonstrates:
- Consistent naming patterns (singular/plural pairs, "add" prefixes)
- Clear help text for complex concepts (e.g., `higherIsBetterHelpText`)
- Logical grouping (metrics, dashboards, widgets, configuration)
- User-friendly terminology (e.g., "editLayout" vs "editAttributes" for different editing modes)
The implementation covers the full feature surface as described in the PR objectives.
**2923-2927: LGTM: Type definitions and feature descriptions are clear and consistent.** The remaining metrology keys demonstrate:
- Clear type distinctions (qualitative vs quantitative metrics)
- Proper use of message format for pluralization (line 3019)
- Consistent description style matching other features
- Generic utility keys (e.g., "count") that promote reusability
The localization additions comprehensively support the new metrology feature set.
Also applies to: 3019-3023
**2913-2922: Debug key is intentional and properly scoped.** The `orderedEntryListDebugTitle` key at line 2919 is used in the OrderedEntryList component and is controlled by a `debug` prop that defaults to `false`. The debug output only appears when explicitly enabled, so it is not user-facing in normal operation. No changes needed.
Summary by CodeRabbit
New Features
Documentation
Chores