ECOPROJECT-4321 | feat: change collected disk size tiers #1062
amalimov wants to merge 1 commit into kubev2v:main from
Conversation

[APPROVALNOTIFIER] This PR is NOT APPROVED. Needs approval from an approver in each of the listed files.

/hold
Note: Reviews paused. This branch is under active development; to avoid an influx of review comments on new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Adds DiskComplexityTier support across the API, inventory model, DuckDB parsing/templates, estimation logic, service wiring, and tests; regenerates embedded OpenAPI/spec payloads. Estimation now prefers DiskComplexityTier and falls back to DiskSizeTier when absent.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Handler as EstimationHandler
    participant Service as EstimationService
    participant Parser as DuckDBParser
    participant DB as DuckDB
    Client->>Handler: POST /calculate-migration-complexity
    Handler->>Service: CalculateMigrationComplexity(cluster)
    Service->>Parser: DiskComplexityTierDistribution(ctx, filters)
    Parser->>Parser: build query via DiskComplexityTierQuery
    Parser->>DB: execute disk_complexity_tier_query
    DB-->>Parser: rows (tier, vm_count, total_size_tb)
    Parser-->>Service: map[tier]DiskSizeTierSummary
    Service->>Service: buildComplexityResult(use DiskComplexityTier or fallback to DiskSizeTier)
    Service-->>Handler: ComplexityResult
    Handler-->>Client: HTTP response
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
Force-pushed 7baefa0 to 7147125 (compare)
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api/v1alpha1/openapi.yaml`:
- Around line 1743-1747: The OpenAPI description for the top disk-complexity
tier currently reads ">50TiB" but the runtime/emitted tier key is "50+TiB";
update the documented label in the description under the component referencing
diskSizeTierSummary (`#/components/schemas/diskSizeTierSummary` and the
surrounding description block) to use the canonical "50+TiB" string so the spec
matches the emitted keys and avoids client-side mismatches.
In `@internal/service/estimation_test.go`:
- Around line 392-418: The test currently only asserts result.ComplexityByDisk
has length 4; instead verify that the fallback from DiskSizeTier correctly
propagated the expected counts/sizes: after calling
estimationSrv.CalculateMigrationComplexity, iterate the expected tiers you built
with buildDiskSizeTier (e.g., the "Easy (0-10TB)" entry) and assert that
result.ComplexityByDisk contains entries matching those tier names and that each
entry's VmCount and TotalSize (or TotalSizeTB) equal the values from
oldStyleTier; update the test (inside the It block) to fail if any expected tier
is missing or its VmCount/TotalSize differ so assertions are specific and
informative.
- Around line 33-45: The shared fixture createTestInventoryForComplexity
currently hardcodes DiskComplexityTier to {"0-10TiB"} which prevents tests from
exercising tier-mapping and fallback logic; update the function so it builds
DiskComplexityTier from the passed-in diskSizeTier parameter (or returns nil
when diskSizeTier is nil) instead of using the fixed diskComplexityTier map,
ensuring the inventory's DiskComplexityTier field reflects the input disk size
tiers and thus lets tests cover real mapping/fallback behavior.
In `@pkg/duckdb_parser/templates/disk_complexity_tier_query.go.tmpl`:
- Line 16: The template currently injects user input directly into SQL via
'{{.ClusterFilter}}' (in disk_complexity_tier_query.go.tmpl, affecting
i."Cluster"), which allows SQL injection and breaks names with quotes; change
the template to stop inlining ClusterFilter and instead use a bind placeholder
(e.g., $N or ?) for the cluster value and pass ClusterFilter as a parameter from
the code that renders/executes the query, or if placeholders are not supported,
call a safe escaping helper (e.g., sqlEscapeString) on .ClusterFilter before
inserting it; ensure callers that construct the query append ClusterFilter to
the query arguments (the code that renders disk_complexity_tier_query.go.tmpl)
so the value is passed safely rather than interpolated.
In `@pkg/estimations/complexity/complexity_test.go`:
- Around line 167-189: Add unit tests for ScoreDiskTierLabel to cover the new
TiB-labelled keys: call ScoreDiskTierLabel with "0-10TiB", "10-20TiB",
"20-50TiB", and "50+TiB" and assert they return 1, 2, 3, and 4 respectively
(mirroring DiskSizeRangeRatings mappings). Place these assertions alongside the
existing DiskSizeRangeRatings tests so the mapping between DiskSizeRangeRatings
and ScoreDiskTierLabel is directly validated; use the ScoreDiskTierLabel
function name to locate the implementation and add matching test cases for each
new label.
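A table-driven shape for those assertions might look like the sketch below. `scoreDiskTierLabel` here is a local stand-in for the real `ScoreDiskTierLabel` (its implementation is assumed, not copied from the repository); the point is the loop over label/score pairs rather than four separate assertions.

```go
package main

import "fmt"

// scoreDiskTierLabel is a hypothetical stand-in for ScoreDiskTierLabel,
// mapping the new TiB-labelled tiers to their complexity scores.
func scoreDiskTierLabel(label string) int {
	scores := map[string]int{
		"0-10TiB":  1,
		"10-20TiB": 2,
		"20-50TiB": 3,
		"50+TiB":   4,
	}
	return scores[label]
}

func main() {
	// Table-driven cases mirroring the DiskSizeRangeRatings mappings.
	cases := []struct {
		label string
		want  int
	}{
		{"0-10TiB", 1},
		{"10-20TiB", 2},
		{"20-50TiB", 3},
		{"50+TiB", 4},
	}
	for _, tc := range cases {
		if got := scoreDiskTierLabel(tc.label); got != tc.want {
			panic(fmt.Sprintf("ScoreDiskTierLabel(%q) = %d, want %d", tc.label, got, tc.want))
		}
	}
	fmt.Println("all tier labels score as expected")
}
```

In the real test file the same table would feed Ginkgo/Gomega entries, so a new label only requires a new row.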
In `@pkg/estimations/complexity/complexity.go`:
- Around line 199-206: DiskSizeRangeRatings currently hardcodes tier scores that
duplicate the canonical DiskSizeScores; change DiskSizeRangeRatings to build and
return its map by iterating or transforming DiskSizeScores so the API labels
("0-10TiB", "10-20TiB", "20-50TiB", "50+TiB") are derived from that single
source of truth (DiskSizeScores) instead of being duplicated, ensuring
DiskSizeRangeRatings returns a map[string]Score produced from DiskSizeScores.
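One way to derive the ratings from a single source of truth is sketched below. The contents of `DiskSizeScores` here are assumed for illustration (including a hypothetical legacy label to show the filtering); the real map in `pkg/estimations/complexity` may differ.

```go
package main

import (
	"fmt"
	"strings"
)

type Score int

// DiskSizeScores is assumed here as the canonical label->score map; the
// "Easy (0-10TB)" entry is a hypothetical legacy label used to show filtering.
var DiskSizeScores = map[string]Score{
	"0-10TiB":       1,
	"10-20TiB":      2,
	"20-50TiB":      3,
	"50+TiB":        4,
	"Easy (0-10TB)": 1,
}

// DiskSizeRangeRatings derives the TiB-labelled ratings from DiskSizeScores
// instead of duplicating the numbers, so the two maps can never drift.
func DiskSizeRangeRatings() map[string]Score {
	result := make(map[string]Score, len(DiskSizeScores))
	for label, score := range DiskSizeScores {
		if strings.HasSuffix(label, "TiB") {
			result[label] = score
		}
	}
	return result
}

func main() {
	fmt.Println(len(DiskSizeRangeRatings())) // 4
}
```

Any future change to a tier's score in `DiskSizeScores` then propagates automatically.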
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 8159c252-530f-420b-a0e9-51e32fd3f2f3
📒 Files selected for processing (17)
api/v1alpha1/agent/spec.gen.go, api/v1alpha1/openapi.yaml, api/v1alpha1/spec.gen.go, api/v1alpha1/types.gen.go, internal/handlers/v1alpha1/estimation_test.go, internal/service/estimation.go, internal/service/estimation_test.go, pkg/duckdb_parser/builder.go, pkg/duckdb_parser/inventory_builder.go, pkg/duckdb_parser/inventory_builder_test.go, pkg/duckdb_parser/queries.go, pkg/duckdb_parser/templates/disk_complexity_tier_query.go.tmpl, pkg/duckdb_parser/templates/disk_size_tier_query.go.tmpl, pkg/estimations/complexity/complexity.go, pkg/estimations/complexity/complexity_test.go, pkg/inventory/converters/to_api.go, pkg/inventory/model.go
Actionable comments posted: 1
♻️ Duplicate comments (5)
api/v1alpha1/openapi.yaml (1)
1743-1747: ⚠️ Potential issue | 🟡 Minor — Use the canonical top-tier label in the description (50+TiB).

The description currently says >50TiB, but the emitted tier keys are 50+TiB. Please align the documented bucket string to prevent client key mismatches.

Suggested fix

```diff
 diskComplexityTier:
   type: object
   description: >
     Distribution of VMs across disk-complexity tiers used by the estimation engine
-    (0-10TiB, 10-20TiB, 20-50TiB, >50TiB).
+    (0-10TiB, 10-20TiB, 20-50TiB, 50+TiB).
   additionalProperties:
     $ref: '#/components/schemas/diskSizeTierSummary'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/v1alpha1/openapi.yaml` around lines 1743-1747: the description for the disk complexity tiers uses the label ">50TiB", which doesn't match the emitted tier keys ("50+TiB"); update the description text in the schema block that documents the distribution of VMs (the description field paired with additionalProperties: $ref: '#/components/schemas/diskSizeTierSummary') to use the canonical top-tier label "50+TiB" so clients and docs use the same bucket string.

pkg/estimations/complexity/complexity.go (1)

196-206: 🧹 Nitpick | 🔵 Trivial — Avoid duplicating tier-score definitions in two maps.

DiskSizeRangeRatings() re-hardcodes values already represented in DiskSizeScores, which can drift.

♻️ Proposed refactor

```diff
 func DiskSizeRangeRatings() map[string]Score {
-	// Only return entries for the new TiB-labelled complexity tiers
-	return map[string]Score{
-		"0-10TiB":  1,
-		"10-20TiB": 2,
-		"20-50TiB": 3,
-		"50+TiB":   4,
-	}
+	result := make(map[string]Score, 4)
+	for label, score := range DiskSizeScores {
+		if strings.HasSuffix(label, "TiB") {
+			result[label] = score
+		}
+	}
+	return result
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/estimations/complexity/complexity.go` around lines 196-206: DiskSizeRangeRatings currently duplicates the tier-to-score mapping that already lives in DiskSizeScores; change DiskSizeRangeRatings to build and return its map by referencing DiskSizeScores (selecting the TiB-labelled keys such as "0-10TiB", "10-20TiB", "20-50TiB", "50+TiB") instead of re-hardcoding numbers, so updates to DiskSizeScores automatically propagate and drift is avoided.

pkg/duckdb_parser/templates/disk_complexity_tier_query.go.tmpl (1)

16-16: ⚠️ Potential issue | 🔴 Critical — Unescaped cluster filter is still SQL-injected into the query.

'{{.ClusterFilter}}' remains unsafe and can both break valid quoted names and permit injection. This should use bound parameters, or at minimum a dedicated SQL-escaping field populated by the builder.

As per coding guidelines: "Security: Be vigilant against vulnerabilities ... Sanitize inputs."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@pkg/duckdb_parser/templates/disk_complexity_tier_query.go.tmpl` at line 16: the template currently injects .ClusterFilter directly into SQL (i."Cluster" = '{{.ClusterFilter}}'), risking SQL injection; change the template to use a bound parameter placeholder instead and update the query builder that renders this template to append .ClusterFilter to the prepared-statement args list (or provide a sanitized .ClusterFilterEscaped field from the builder), ensuring the runtime uses parameter binding rather than string interpolation.

internal/service/estimation_test.go (2)
33-45: ⚠️ Potential issue | 🟠 Major — Hardcoded DiskComplexityTier in the test fixture hides the behavior under test.

On lines 34-36 and line 44, createTestInventoryForComplexity always injects the same complexity tier, so tests that provide a custom diskSizeTier are not actually validating the tier-mapping/fallback paths.

Suggested refactor

```diff
-func createTestInventoryForComplexity(clusterID string, osInfo *map[string]api.OsInfo, diskSizeTier *map[string]api.DiskSizeTierSummary) []byte {
-	diskComplexityTier := map[string]api.DiskSizeTierSummary{
-		"0-10TiB": {VmCount: 125, TotalSizeTB: 8.5},
-	}
+func createTestInventoryForComplexity(
+	clusterID string,
+	osInfo *map[string]api.OsInfo,
+	diskSizeTier *map[string]api.DiskSizeTierSummary,
+	diskComplexityTier *map[string]api.DiskSizeTierSummary,
+) []byte {
 	inventory := api.Inventory{
 		Clusters: map[string]api.InventoryData{
 			clusterID: {
 				Vms: api.VMs{
 					Total:        10,
 					OsInfo:       osInfo,
 					DiskSizeTier: diskSizeTier,
-					DiskComplexityTier: &diskComplexityTier,
+					DiskComplexityTier: diskComplexityTier,
 					DiskGB:   api.VMResourceBreakdown{Total: 100},
 					CpuCores: api.VMResourceBreakdown{Total: 40},
 					RamGB:    api.VMResourceBreakdown{Total: 80},
 				},
 			},
 		},
 	}
```

As per coding guidelines, "Coverage: Strive for high test coverage on critical logic paths."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@internal/service/estimation_test.go` around lines 33-45: the test helper createTestInventoryForComplexity currently hardcodes diskComplexityTier, which masks behavior under test; modify the function to stop injecting the fixed diskComplexityTier and instead derive or pass through the caller-provided tier: use the provided diskSizeTier (or an additional optional parameter) to construct DiskComplexityTier, or set DiskComplexityTier to nil when no custom tier is supplied so tests can exercise mapping/fallback paths; update references to DiskComplexityTier in createTestInventoryForComplexity to use the supplied data rather than the hardcoded map.
392-418: ⚠️ Potential issue | 🟠 Major — Fallback test assertions are too weak for backward-compatibility coverage.

On line 417, asserting only HaveLen(4) can pass even when the fallback from the legacy DiskSizeTier maps incorrect counts/sizes. Assert the expected propagated values (e.g., the score-1 bucket gets VmCount=50, TotalSizeTB=3.0).

Strengthen assertions

```diff
 result, err := estimationSrv.CalculateMigrationComplexity(ctx, assessmentID, clusterID)
 Expect(err).To(BeNil())
 Expect(result.ComplexityByDisk).To(HaveLen(4))
+Expect(result.ComplexityByDisk[0].Score).To(Equal(1))
+Expect(result.ComplexityByDisk[0].VMCount).To(Equal(50))
+Expect(result.ComplexityByDisk[0].TotalSizeTB).To(Equal(3.0))
```

As per coding guidelines, "Assertions must be specific and provide informative messages on failure."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@internal/service/estimation_test.go` around lines 392-418: the test's final assertion on result.ComplexityByDisk is too weak (only checking length); update the It block that builds oldStyleTier and calls estimationSrv.CalculateMigrationComplexity to assert specific propagated values from the legacy DiskSizeTier: locate the oldStyleTier variable and the call to estimationSrv.CalculateMigrationComplexity, then replace/augment the HaveLen(4) check with assertions that the expected bucket (e.g., the score-1 / "Easy (0-10TB)" bucket in result.ComplexityByDisk) has VmCount == 50 and TotalSizeTB == 3.0 (and assert any other expected fields like DiskGB.Total if applicable) so the fallback mapping from DiskSizeTier to DiskComplexityTier is validated precisely.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/inventory/converters/to_api.go`:
- Around line 57-63: The conversion currently always allocates
diskComplexityTiers as a non-nil map even when source v.DiskComplexityTiers is
nil, losing nil/absent semantics; update the converter (the block that builds
diskComplexityTiers in pkg/inventory/converters/to_api.go and the analogous
block near the api.DiskSizeTierSummary usage at the other occurrence) to check
if v.DiskComplexityTiers == nil and leave diskComplexityTiers as nil in that
case, otherwise allocate the map and populate entries from v.DiskComplexityTiers
(mapping VMCount -> VmCount and TotalSizeTB -> TotalSizeTB) so consumers can
detect absence via a nil pointer.
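The nil-preservation pattern the comment asks for can be sketched as below. The types are minimal stand-ins, not the repository's real `api` package: `TierSummary` and `APITierSummary` are hypothetical, mirroring only the field renames the comment mentions (VMCount → VmCount).

```go
package main

import "fmt"

// TierSummary is a hypothetical stand-in for the inventory model's summary.
type TierSummary struct {
	VMCount     int
	TotalSizeTB float64
}

// APITierSummary is a hypothetical stand-in for api.DiskSizeTierSummary.
type APITierSummary struct {
	VmCount     int
	TotalSizeTB float64
}

// convertTiers preserves nil-ness: a nil source yields a nil pointer (so the
// JSON field can be omitted), while a non-nil source yields a populated map.
func convertTiers(src map[string]TierSummary) *map[string]APITierSummary {
	if src == nil {
		return nil
	}
	out := make(map[string]APITierSummary, len(src))
	for tier, s := range src {
		out[tier] = APITierSummary{VmCount: s.VMCount, TotalSizeTB: s.TotalSizeTB}
	}
	return &out
}

func main() {
	fmt.Println(convertTiers(nil) == nil) // true: absence is preserved
	got := convertTiers(map[string]TierSummary{"0-10TiB": {VMCount: 5, TotalSizeTB: 1.5}})
	fmt.Println((*got)["0-10TiB"].VmCount) // 5
}
```

Consumers can then distinguish "no data collected" (nil) from "data collected, zero tiers" (empty map).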
---
Duplicate comments:
In `@api/v1alpha1/openapi.yaml`:
- Around line 1743-1747: The description for the disk complexity tiers in the
OpenAPI schema uses the label ">50TiB" which doesn't match emitted tier keys
("50+TiB"); update the description text in the schema block that documents the
distribution of VMs (the description field paired with additionalProperties:
$ref: '#/components/schemas/diskSizeTierSummary') to use the canonical top-tier
label "50+TiB" so clients and docs use the same bucket string.
In `@internal/service/estimation_test.go`:
- Around line 33-45: The test helper createTestInventoryForComplexity currently
hardcodes diskComplexityTier, which masks behavior under test; modify the
function to stop injecting the fixed diskComplexityTier and instead derive or
pass through the caller-provided tier: use the provided diskSizeTier (or an
additional optional parameter) to construct DiskComplexityTier, or set
DiskComplexityTier to nil when no custom tier is supplied so tests can exercise
mapping/fallback paths; update references to DiskComplexityTier in
createTestInventoryForComplexity to use the supplied data rather than the
hardcoded map.
- Around line 392-418: The test's final assertion on result.ComplexityByDisk is
too weak (only checking length); update the It block that builds oldStyleTier
and calls estimationSrv.CalculateMigrationComplexity to assert specific
propagated values from the legacy DiskSizeTier: locate the oldStyleTier variable
and the call to estimationSrv.CalculateMigrationComplexity, then replace/augment
the HaveLen(4) check with assertions that the expected bucket (e.g., the score-1
/ "Easy (0-10TB)" bucket in result.ComplexityByDisk) has VmCount == 50 and
TotalSizeTB == 3.0 (and assert any other expected fields like DiskGB.Total if
applicable) so the fallback mapping from DiskSizeTier to DiskComplexityTier is
validated precisely.
In `@pkg/duckdb_parser/templates/disk_complexity_tier_query.go.tmpl`:
- Line 16: The template currently injects .ClusterFilter directly into SQL
(i."Cluster" = '{{.ClusterFilter}}'), risking SQL injection; change the template
to use a bound parameter placeholder instead (e.g., i."Cluster" =
$CLUSTER_FILTER or i."Cluster" = ?) and update the code that renders this
template (the query builder that supplies parameters for
disk_complexity_tier_query.go.tmpl) to append .ClusterFilter to the
prepared-statement args list (or provide a sanitized .ClusterFilterEscaped field
from the builder), ensuring the runtime uses parameter binding rather than
string interpolation.
In `@pkg/estimations/complexity/complexity.go`:
- Around line 196-206: DiskSizeRangeRatings currently duplicates the
tier-to-score mapping that already lives in DiskSizeScores; change
DiskSizeRangeRatings to build and return its map by referencing DiskSizeScores
(e.g., selecting the TiB-labelled keys such as
"0-10TiB","10-20TiB","20-50TiB","50+TiB") instead of re-hardcoding numbers so
updates to DiskSizeScores automatically propagate and avoid drift; locate
DiskSizeRangeRatings and replace the literal map with a small routine that reads
from DiskSizeScores and returns only the TiB keys.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: d42986e2-65f3-4722-932c-45a1d92ebec9
📒 Files selected for processing (17)
api/v1alpha1/agent/spec.gen.go, api/v1alpha1/openapi.yaml, api/v1alpha1/spec.gen.go, api/v1alpha1/types.gen.go, internal/handlers/v1alpha1/estimation_test.go, internal/service/estimation.go, internal/service/estimation_test.go, pkg/duckdb_parser/builder.go, pkg/duckdb_parser/inventory_builder.go, pkg/duckdb_parser/inventory_builder_test.go, pkg/duckdb_parser/queries.go, pkg/duckdb_parser/templates/disk_complexity_tier_query.go.tmpl, pkg/duckdb_parser/templates/disk_size_tier_query.go.tmpl, pkg/estimations/complexity/complexity.go, pkg/estimations/complexity/complexity_test.go, pkg/inventory/converters/to_api.go, pkg/inventory/model.go
Force-pushed 7147125 to 6b97bab (compare)
/remove-hold
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/service/estimation_test.go (1)
507-531: ⚠️ Potential issue | 🟡 Minor — Tighten the assertion to the new two-field error contract.

Line 531 only checks diskSizeTier, so this test would still pass if the service regressed to the old single-field error. Assert diskComplexityTier too, or match the full message.

Suggested fix

```diff
 Expect(result).To(BeNil())
 Expect(err).NotTo(BeNil())
 Expect(err.Error()).To(ContainSubstring("diskSizeTier"))
+Expect(err.Error()).To(ContainSubstring("diskComplexityTier"))
 })
```

As per coding guidelines, "Assertions must be specific and provide informative messages on failure."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/service/estimation_test.go` around lines 507 - 531, The test in estimation_test.go for CalculateMigrationComplexity currently only asserts the error contains "diskSizeTier" which is too loose; update the assertion on err to ensure the new two-field error contract is enforced by checking that the error message contains both "diskSizeTier" and "diskComplexityTier" (or assert the full expected error string) after calling estimationSrv.CalculateMigrationComplexity(ctx, assessmentID, clusterID) so the test fails if either field is missing from the error.
♻️ Duplicate comments (2)
pkg/inventory/converters/to_api.go (1)
57-63: ⚠️ Potential issue | 🟠 Major — Preserve nil semantics for DiskComplexityTier when source data is absent.

This block always creates a non-nil map and always assigns a non-nil pointer. That removes absence semantics and can serialize an empty object instead of omitting the field.

Suggested fix

```diff
-	diskComplexityTiers := make(map[string]api.DiskSizeTierSummary)
-	for tier, summary := range v.DiskComplexityTiers {
-		diskComplexityTiers[tier] = api.DiskSizeTierSummary{
-			VmCount:     summary.VMCount,
-			TotalSizeTB: summary.TotalSizeTB,
-		}
-	}
+	var diskComplexityTier *map[string]api.DiskSizeTierSummary
+	if v.DiskComplexityTiers != nil {
+		tiers := make(map[string]api.DiskSizeTierSummary, len(v.DiskComplexityTiers))
+		for tier, summary := range v.DiskComplexityTiers {
+			tiers[tier] = api.DiskSizeTierSummary{
+				VmCount:     summary.VMCount,
+				TotalSizeTB: summary.TotalSizeTB,
+			}
+		}
+		diskComplexityTier = &tiers
+	}
@@
-	DiskComplexityTier: &diskComplexityTiers,
+	DiskComplexityTier: diskComplexityTier,
```

Also applies to: 151-151
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/inventory/converters/to_api.go` around lines 57-63: the conversion currently always allocates diskComplexityTiers and thus loses nil/absence semantics; only create and populate the map when the source v.DiskComplexityTiers is non-nil and otherwise leave the target field nil. Concretely, guard the block with if v.DiskComplexityTiers != nil { ... }, assign the populated map (or its pointer if the target field is a *map) to the output, and apply the same nil-guard fix to the other occurrence referenced (around line 151) so missing source data results in a nil target rather than an empty map.

internal/service/estimation_test.go (1)

33-45: ⚠️ Potential issue | 🟠 Major — Shared helper still bypasses the code path this PR is changing.

createTestInventoryForComplexity now always injects a fixed DiskComplexityTier, and CalculateMigrationComplexity prefers that field over DiskSizeTier (internal/service/estimation.go, lines 200-217). That means every test built through this helper ignores the diskSizeTier input, so the disk assertions below validate the fixture rather than the tier-mapping/fallback behavior.

Proposed refactor

```diff
-func createTestInventoryForComplexity(clusterID string, osInfo *map[string]api.OsInfo, diskSizeTier *map[string]api.DiskSizeTierSummary) []byte {
-	diskComplexityTier := map[string]api.DiskSizeTierSummary{
-		"0-10TiB": {VmCount: 125, TotalSizeTB: 8.5},
-	}
+func createTestInventoryForComplexity(
+	clusterID string,
+	osInfo *map[string]api.OsInfo,
+	diskSizeTier *map[string]api.DiskSizeTierSummary,
+	diskComplexityTier *map[string]api.DiskSizeTierSummary,
+) []byte {
 	inventory := api.Inventory{
 		Clusters: map[string]api.InventoryData{
 			clusterID: {
 				Vms: api.VMs{
 					Total:        10,
 					OsInfo:       osInfo,
 					DiskSizeTier: diskSizeTier,
-					DiskComplexityTier: &diskComplexityTier,
+					DiskComplexityTier: diskComplexityTier,
 					DiskGB:   api.VMResourceBreakdown{Total: 100},
 					CpuCores: api.VMResourceBreakdown{Total: 40},
 					RamGB:    api.VMResourceBreakdown{Total: 80},
```

Callers can then pass nil when they want to exercise the DiskSizeTier fallback, and pass an explicit map only in the dedicated "prefer DiskComplexityTier" test.

As per coding guidelines, "Coverage: Strive for high test coverage on critical logic paths."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/service/estimation_test.go` around lines 33 - 45, The test helper createTestInventoryForComplexity always injects a fixed DiskComplexityTier which causes tests to bypass the DiskSizeTier fallback logic in CalculateMigrationComplexity; change createTestInventoryForComplexity so it only sets Inventory.DiskComplexityTier when the caller provides a non-nil diskComplexityTier argument (i.e., accept an extra parameter or reuse the existing diskSizeTier param semantics), leaving it nil when callers want to exercise the DiskSizeTier-to-DiskComplexityTier fallback, and update tests to pass nil for DiskComplexityTier when they should validate the tier-mapping/fallback behavior and only pass an explicit map in the test that specifically verifies prefer-DiskComplexityTier behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/service/estimation_test.go`:
- Around line 392-422: Replace the single-case legacy-label test with a
table-driven test that iterates all old disk labels and their expected
score/VMCount/TotalSizeTB; for each entry, build a DiskSizeTier using
buildDiskSizeTier, create the inventory via createTestInventoryForComplexity
(then strip DiskComplexityTier by unmarshalling into inv and setting
inv.Clusters[clusterID] with only DiskSizeTier), marshal and store it in
mockStore.assessments, call estimationSrv.CalculateMigrationComplexity(ctx,
assessmentID, clusterID) and assert that result.ComplexityByDisk contains the
expected Score, VMCount and TotalSizeTB for that label; keep the test name and
structure but replace the hardcoded "Easy (0-10TB)" assertions with the loop
over the legacy-label table mapping to expected values.
In `@internal/service/estimation.go`:
- Around line 200-208: The current selection logic for disk tier data
incorrectly treats an empty DiskComplexityTier map as usable and therefore skips
the fallback; update the conditional that assigns diskSource so that it prefers
DiskComplexityTier only when it is non-nil AND non-empty (e.g., check
len(clusterInventory.Vms.DiskComplexityTier) > 0), otherwise fall back to
DiskSizeTier; ensure the final error still triggers when both DiskComplexityTier
and DiskSizeTier are nil/empty. Reference:
clusterInventory.Vms.DiskComplexityTier, clusterInventory.Vms.DiskSizeTier, and
the diskSource variable.
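The nil-or-empty fallback the comment describes can be sketched as a small selection helper. This is illustrative only: `tierMap` and `pickDiskSource` are hypothetical names, and the real logic operates directly on `clusterInventory.Vms.DiskComplexityTier` / `DiskSizeTier` inside the service.

```go
package main

import "fmt"

// tierMap is a hypothetical stand-in for *map[string]api.DiskSizeTierSummary.
type tierMap map[string]struct{ VmCount int }

// pickDiskSource prefers the complexity tier only when it is non-nil AND
// non-empty; otherwise it falls back to the size tier, and errors when
// neither carries data.
func pickDiskSource(complexity, size *tierMap) (*tierMap, error) {
	src := complexity
	if src == nil || len(*src) == 0 {
		src = size
	}
	if src == nil || len(*src) == 0 {
		return nil, fmt.Errorf("inventory has no disk tier data (diskComplexityTier or diskSizeTier)")
	}
	return src, nil
}

func main() {
	empty := tierMap{}
	legacy := tierMap{"Easy (0-10TB)": {VmCount: 50}}
	// A non-nil but empty complexity tier must still fall back to size tiers.
	src, err := pickDiskSource(&empty, &legacy)
	fmt.Println(err == nil, len(*src)) // true 1
}
```

The important detail is the `len(*src) == 0` check in both conditions: the error still fires when both maps are nil or empty.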
In `@pkg/estimations/complexity/complexity.go`:
- Around line 132-136: The tier-4 comment in
pkg/estimations/complexity/complexity.go is inconsistent with the label and
query logic (which treat tier-4 as >=50TiB); update the comment line that
currently reads ">50TiB" to ">=50TiB" (or "50+TiB") so the textual description
matches the implemented semantics used by the scoring logic and the "50+TiB"
label.
---
Outside diff comments:
In `@internal/service/estimation_test.go`:
- Around line 507-531: The test in estimation_test.go for
CalculateMigrationComplexity currently only asserts the error contains
"diskSizeTier" which is too loose; update the assertion on err to ensure the new
two-field error contract is enforced by checking that the error message contains
both "diskSizeTier" and "diskComplexityTier" (or assert the full expected error
string) after calling estimationSrv.CalculateMigrationComplexity(ctx,
assessmentID, clusterID) so the test fails if either field is missing from the
error.
---
Duplicate comments:
In `@internal/service/estimation_test.go`:
- Around line 33-45: The test helper createTestInventoryForComplexity always
injects a fixed DiskComplexityTier which causes tests to bypass the DiskSizeTier
fallback logic in CalculateMigrationComplexity; change
createTestInventoryForComplexity so it only sets Inventory.DiskComplexityTier
when the caller provides a non-nil diskComplexityTier argument (i.e., accept an
extra parameter or reuse the existing diskSizeTier param semantics), leaving it
nil when callers want to exercise the DiskSizeTier-to-DiskComplexityTier
fallback, and update tests to pass nil for DiskComplexityTier when they should
validate the tier-mapping/fallback behavior and only pass an explicit map in the
test that specifically verifies prefer-DiskComplexityTier behavior.
In `@pkg/inventory/converters/to_api.go`:
- Around line 57-63: The conversion currently always allocates
diskComplexityTiers and thus loses nil/absence semantics; only create and
populate the map when the source v.DiskComplexityTiers is non-nil and otherwise
leave the target field nil. Concretely, guard the block with if
v.DiskComplexityTiers != nil { diskComplexityTiers :=
make(map[string]api.DiskSizeTierSummary); for tier, summary := range
v.DiskComplexityTiers { ... } ; assign the populated map (or its pointer if the
target field is a *map) to the output }, and apply the same nil-guard fix to the
other occurrence referenced (around line 151) so missing source data results in
a nil target rather than an empty map.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: e5a7a5a6-6bdc-4f2a-93a3-11bc05546f34
📒 Files selected for processing (17)
api/v1alpha1/agent/spec.gen.go, api/v1alpha1/openapi.yaml, api/v1alpha1/spec.gen.go, api/v1alpha1/types.gen.go, internal/handlers/v1alpha1/estimation_test.go, internal/service/estimation.go, internal/service/estimation_test.go, pkg/duckdb_parser/builder.go, pkg/duckdb_parser/inventory_builder.go, pkg/duckdb_parser/inventory_builder_test.go, pkg/duckdb_parser/queries.go, pkg/duckdb_parser/templates/disk_complexity_tier_query.go.tmpl, pkg/duckdb_parser/templates/disk_size_tier_query.go.tmpl, pkg/estimations/complexity/complexity.go, pkg/estimations/complexity/complexity_test.go, pkg/inventory/converters/to_api.go, pkg/inventory/model.go
Force-pushed 6b97bab to 7c7c9dc (compare)
Actionable comments posted: 3
♻️ Duplicate comments (4)
internal/service/estimation.go (1)
202-207: ⚠️ Potential issue | 🟠 Major — Handle empty diskComplexityTier as missing data.

Line 203 currently falls back only on nil. If diskComplexityTier is non-nil but empty, the code skips a valid diskSizeTier fallback and loses usable tier data.

💡 Suggested fix

```diff
 diskSource := clusterInventory.Vms.DiskComplexityTier
-if diskSource == nil {
+if diskSource == nil || len(*diskSource) == 0 {
 	diskSource = clusterInventory.Vms.DiskSizeTier
 }
-if diskSource == nil {
+if diskSource == nil || len(*diskSource) == 0 {
 	return nil, fmt.Errorf("inventory has no disk tier data (diskComplexityTier or diskSizeTier)")
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/service/estimation.go` around lines 202 - 207, The current fallback only checks for nil and ignores empty DiskComplexityTier; modify the logic around diskSource (currently assigned from clusterInventory.Vms.DiskComplexityTier) to treat an empty collection as missing: if diskSource is nil or has zero length, set diskSource = clusterInventory.Vms.DiskSizeTier, and then if diskSource is still nil or empty return the existing error. Update checks referencing diskSource, DiskComplexityTier and DiskSizeTier accordingly.internal/service/estimation_test.go (2)
392-422: 🧹 Nitpick | 🔵 Trivial

**Expand legacy fallback to a table-driven test over all old labels.**

This compatibility path currently validates only one legacy label (`"Easy (0-10TB)"`). Please cover all legacy labels in a table-driven loop to protect the full fallback contract.

As per coding guidelines, "Coverage: Strive for high test coverage on critical logic paths." and "Readability: Use table-driven tests for multiple cases."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/service/estimation_test.go` around lines 392 - 422, The test currently verifies only the "Easy (0-10TB)" legacy DiskSizeTier label; change the spec into a table-driven test that iterates all legacy labels and validates the fallback mapping for each. For each entry: build a per-label oldStyleTier with buildDiskSizeTier (using a distinct VmCount and TotalSizeTB per case), createTestInventoryForComplexity and then marshal/unmarshal into inv while leaving DiskComplexityTier nil, store it via createTestAssessmentFromRawInventory, call estimationSrv.CalculateMigrationComplexity(ctx, assessmentID, clusterID) and assert that the corresponding entry in result.ComplexityByDisk has the expected Score, VMCount and TotalSizeTB for that legacy label; reference the legacy labels and expected scores in the table and reuse the existing setup (mockStore, assessmentID, clusterID) so each case runs independently.
33-45: ⚠️ Potential issue | 🟠 Major

**Avoid hardcoded `DiskComplexityTier` in the shared fixture.**

The helper now forces a fixed `DiskComplexityTier`, so tests that vary `diskSizeTier` won't exercise real mapping/fallback behavior. This can mask regressions in disk-tier handling.

As per coding guidelines, "Coverage: Strive for high test coverage on critical logic paths."

Refactor direction

```diff
-func createTestInventoryForComplexity(clusterID string, osInfo *map[string]api.OsInfo, diskSizeTier *map[string]api.DiskSizeTierSummary) []byte {
-	diskComplexityTier := map[string]api.DiskSizeTierSummary{
-		"0-10TiB": {VmCount: 125, TotalSizeTB: 8.5},
-	}
+func createTestInventoryForComplexity(
+	clusterID string,
+	osInfo *map[string]api.OsInfo,
+	diskSizeTier *map[string]api.DiskSizeTierSummary,
+	diskComplexityTier *map[string]api.DiskSizeTierSummary,
+) []byte {
 	...
-	DiskComplexityTier: &diskComplexityTier,
+	DiskComplexityTier: diskComplexityTier,
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@internal/service/estimation_test.go` around lines 33-45: createTestInventoryForComplexity currently hardcodes DiskComplexityTier, which prevents tests from exercising disk-tier mapping/fallback; change the helper to stop injecting a fixed diskComplexityTier by adding a parameter (e.g., diskComplexityTier *map[string]api.DiskSizeTierSummary) or by setting DiskComplexityTier to nil when no explicit complexity tier is desired, and update the Inventory construction to assign DiskComplexityTier: diskComplexityTier (or nil) instead of the hardcoded map so tests that vary diskSizeTier can trigger real mapping/fallback behavior.

pkg/estimations/complexity/complexity.go (1)
132-135: ⚠️ Potential issue | 🟡 Minor

**Align tier-4 comment with implemented label semantics.**

Line 135 says `>50TiB`, but the active label is `50+TiB` (effectively `>=50TiB`). Please make the wording consistent to avoid ambiguity.

Proposed fix

```diff
-// Score 4 — >50TiB
+// Score 4 — 50+TiB (>=50TiB)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/estimations/complexity/complexity.go` around lines 132 - 135, Update the Score 4 comment so its wording matches the implemented label semantics (50+TiB / >=50TiB). Specifically change the comment on the "Score 4" line (the line that currently reads ">50TiB") to use ">=50TiB" or "50+TiB" so it is unambiguous and consistent with the active label semantics in complexity.go.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/duckdb_parser/inventory_builder_test.go`:
- Around line 940-991: The test TestBuildInventory_DiskSizeTiers only checks one
bucket; add table-driven boundary cases to catch off-by-one tiering by creating
disk entries with Capacity MiB at and just below/above cut points (100GiB,
500GiB, 1TiB, 2TiB, 5TiB, 10TiB, 20TiB) and expected DiskSizeTiers and
DiskComplexityTiers labels; for each case build a tmp Excel (reuse
NewExcelSheet/vDisk headers and parser.IngestRvTools + parser.BuildInventory),
then assert inv.VCenter.VMs.DiskSizeTiers and
inv.VCenter.VMs.DiskComplexityTiers contain the expected bucket keys and VM
counts—loop the table, generate Capacity MiB values (exact boundary, boundary-1,
boundary+1), and check classification to detect off-by-one regressions.
In `@pkg/estimations/complexity/complexity_test.go`:
- Around line 201-225: Tests for ScoreDiskTierLabel are duplicated; consolidate
them by keeping a single Describe("ScoreDiskTierLabel", ...) block and move all
cases into one DescribeTable (or a single DescribeTable plus the existing It for
the unknown-case) so both TiB labels and legacy labels are covered in the same
table. Update the table function signature to accept (label string,
expectedScore int) and include all Entry(...) rows from both existing tables,
then remove the duplicate Describe block to avoid two separate suites for the
same function.
In `@pkg/inventory/converters/to_api_test.go`:
- Around line 361-411: Add parallelization to this table-driven test by calling
t.Parallel() at the start of TestToAPIVMs_DiskComplexityTier (and inside each
t.Run subtest) so each case runs independently and concurrently; ensure there is
no shared mutable state used by the test and reference the test function
TestToAPIVMs_DiskComplexityTier and the converter toAPIVMs when making the
change.
---
Duplicate comments:
In `@internal/service/estimation_test.go`:
- Around line 392-422: The test currently verifies only the "Easy (0-10TB)"
legacy DiskSizeTier label; change the spec into a table-driven test that
iterates all legacy labels and validates the fallback mapping for each. For each
entry: build a per-label oldStyleTier with buildDiskSizeTier (using a distinct
VmCount and TotalSizeTB per case), createTestInventoryForComplexity and then
marshal/unmarshal into inv while leaving DiskComplexityTier nil, store it via
createTestAssessmentFromRawInventory, call
estimationSrv.CalculateMigrationComplexity(ctx, assessmentID, clusterID) and
assert that the corresponding entry in result.ComplexityByDisk has the expected
Score, VMCount and TotalSizeTB for that legacy label; reference the legacy
labels and expected scores in the table and reuse the existing setup (mockStore,
assessmentID, clusterID) so each case runs independently.
- Around line 33-45: createTestInventoryForComplexity currently hardcodes
DiskComplexityTier which prevents tests from exercising disk-tier
mapping/fallback; change the helper to stop injecting a fixed diskComplexityTier
by adding a parameter (e.g., diskComplexityTier
*map[string]api.DiskSizeTierSummary) or by setting DiskComplexityTier to nil
when no explicit complexity tier is desired, and update the Inventory
construction to assign DiskComplexityTier: diskComplexityTier (or nil) instead
of the hardcoded map so tests that vary diskSizeTier can trigger real
mapping/fallback behavior.
In `@internal/service/estimation.go`:
- Around line 202-207: The current fallback only checks for nil and ignores
empty DiskComplexityTier; modify the logic around diskSource (currently assigned
from clusterInventory.Vms.DiskComplexityTier) to treat an empty collection as
missing: if diskSource is nil or has zero length, set diskSource =
clusterInventory.Vms.DiskSizeTier, and then if diskSource is still nil or empty
return the existing error. Update checks referencing diskSource,
DiskComplexityTier and DiskSizeTier accordingly.
In `@pkg/estimations/complexity/complexity.go`:
- Around line 132-135: Update the Score 4 comment so its wording matches the
implemented label semantics (50+TiB / >=50TiB). Specifically change the comment
on the "Score 4" line (the line that currently reads ">50TiB") to use ">=50TiB"
or "50+TiB" so it is unambiguous and consistent with the active label semantics
in complexity.go.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 51ba49ad-6f49-45b1-b3e8-2042ae56a24c
📒 Files selected for processing (18)

api/v1alpha1/agent/spec.gen.go
api/v1alpha1/openapi.yaml
api/v1alpha1/spec.gen.go
api/v1alpha1/types.gen.go
internal/handlers/v1alpha1/estimation_test.go
internal/service/estimation.go
internal/service/estimation_test.go
pkg/duckdb_parser/builder.go
pkg/duckdb_parser/inventory_builder.go
pkg/duckdb_parser/inventory_builder_test.go
pkg/duckdb_parser/queries.go
pkg/duckdb_parser/templates/disk_complexity_tier_query.go.tmpl
pkg/duckdb_parser/templates/disk_size_tier_query.go.tmpl
pkg/estimations/complexity/complexity.go
pkg/estimations/complexity/complexity_test.go
pkg/inventory/converters/to_api.go
pkg/inventory/converters/to_api_test.go
pkg/inventory/model.go
Force-pushed from 7c7c9dc to aac1a83 (Compare)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/service/estimation_test.go (1)
534-562: 🧹 Nitpick | 🔵 Trivial

**Add a success-path fallback case for empty `DiskComplexityTier`.**

You cover the (nil, nil) and (empty, empty) error paths, but not the success path where `DiskComplexityTier` is an empty map and `DiskSizeTier` is populated. That branch is explicitly handled in service code (the `len(*diskSource) == 0` fallback) and should be asserted.

As per coding guidelines, "Coverage: Strive for high test coverage on critical logic paths."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/service/estimation_test.go` around lines 534 - 562, Add a new test case next to the existing ones that constructs an inventory where DiskComplexityTier is an empty map (e.g., &emptyTier) but DiskSizeTier is populated with valid entries, marshal it into data and insert into mockStore.assessments via createTestAssessmentFromRawInventory, then call estimationSrv.CalculateMigrationComplexity(ctx, assessmentID, clusterID) and assert it returns a non-nil result and nil error (and optionally assert expected complexity fields), verifying the service’s len(*diskSource) == 0 fallback path for DiskComplexityTier works when DiskSizeTier is present.
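The nil-versus-empty distinction behind this nitpick can be reproduced in isolation. Below is a minimal, self-contained sketch — `TierSummary` and `pickDiskSource` are hypothetical stand-ins for the real `api.DiskSizeTierSummary` type and the service's fallback logic, not the project's code:

```go
package main

import "fmt"

// TierSummary is an illustrative stand-in for api.DiskSizeTierSummary.
type TierSummary struct {
	VmCount     int
	TotalSizeTB float64
}

// pickDiskSource mirrors the fallback under discussion: prefer the
// complexity tiers, fall back to the size tiers, and treat a non-nil
// but empty map the same as a missing one.
func pickDiskSource(complexityTier, sizeTier *map[string]TierSummary) (*map[string]TierSummary, error) {
	src := complexityTier
	if src == nil || len(*src) == 0 {
		src = sizeTier
	}
	if src == nil || len(*src) == 0 {
		return nil, fmt.Errorf("inventory has no disk tier data (diskComplexityTier or diskSizeTier)")
	}
	return src, nil
}

func main() {
	empty := map[string]TierSummary{}
	size := map[string]TierSummary{"Easy (0-10TB)": {VmCount: 3, TotalSizeTB: 4.2}}

	// Empty complexity map falls back to the populated size tiers.
	src, err := pickDiskSource(&empty, &size)
	fmt.Println(err, (*src)["Easy (0-10TB)"].VmCount)
}
```

A test of the success path would assert exactly this: an empty `DiskComplexityTier` plus a populated `DiskSizeTier` yields a result, not an error.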
♻️ Duplicate comments (4)
pkg/inventory/converters/to_api_test.go (1)
361-411: 🧹 Nitpick | 🔵 Trivial

**Consider parallelizing this independent table-driven test.**

This test appears isolated and can run in parallel with subtests.

Suggested diff

```diff
 func TestToAPIVMs_DiskComplexityTier(t *testing.T) {
+	t.Parallel()
 	tests := []struct {
@@
 	for _, tt := range tests {
+		tt := tt
 		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
 			result := toAPIVMs(&tt.input)
```

As per coding guidelines: "Parallelism: ensure tests are independent as much as possible."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@pkg/inventory/converters/to_api_test.go` around lines 361-411: The table-driven test TestToAPIVMs_DiskComplexityTier can run its subtests in parallel; update the test to call t.Parallel() for the parent test and inside each t.Run subtest (or at minimum inside each subtest) so each case executes concurrently and remains independent; locate the TestToAPIVMs_DiskComplexityTier function and add t.Parallel() near the top of the test and inside the anonymous func passed to t.Run to enable parallel execution of subtests that call toAPIVMs and assert on result.DiskComplexityTier.

internal/service/estimation_test.go (2)
33-45: ⚠️ Potential issue | 🟠 Major

**Shared fixture hardcodes `DiskComplexityTier` and masks the behavior under test.**

The helper at line 33 always injects `DiskComplexityTier = {"0-10TiB": ...}`, so tests that pass varying `diskSizeTier` inputs do not actually validate disk-tier mapping/fallback paths (e.g., the case at line 261).

As per coding guidelines, "Coverage: Strive for high test coverage on critical logic paths."

Proposed refactor

```diff
-func createTestInventoryForComplexity(clusterID string, osInfo *map[string]api.OsInfo, diskSizeTier *map[string]api.DiskSizeTierSummary) []byte {
-	diskComplexityTier := map[string]api.DiskSizeTierSummary{
-		"0-10TiB": {VmCount: 125, TotalSizeTB: 8.5},
-	}
+func createTestInventoryForComplexity(
+	clusterID string,
+	osInfo *map[string]api.OsInfo,
+	diskSizeTier *map[string]api.DiskSizeTierSummary,
+	diskComplexityTier *map[string]api.DiskSizeTierSummary,
+) []byte {
 	inventory := api.Inventory{
 		Clusters: map[string]api.InventoryData{
 			clusterID: {
 				Vms: api.VMs{
 					Total:              10,
 					OsInfo:             osInfo,
 					DiskSizeTier:       diskSizeTier,
-					DiskComplexityTier: &diskComplexityTier,
+					DiskComplexityTier: diskComplexityTier,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/service/estimation_test.go` around lines 33 - 45, The helper createTestInventoryForComplexity currently always injects a hardcoded diskComplexityTier which masks test variations; change it to stop setting DiskComplexityTier to the fixed map and instead derive it from the provided diskSizeTier parameter (or set DiskComplexityTier to nil when diskSizeTier is nil) so tests that vary diskSizeTier or expect fallback mapping exercise the real logic in functions that consume DiskComplexityTier and DiskSizeTier (refer to createTestInventoryForComplexity, DiskComplexityTier, and DiskSizeTier in tests).
392-422: 🛠️ Refactor suggestion | 🟠 Major

**Expand the backward-compat fallback test to all legacy labels.**

This test validates only `"Easy (0-10TB)"`. Regressions in `"Medium (10-20TB)"`, `"Hard (20-50TB)"`, or `"White Glove (>50TB)"` would still pass.

As per coding guidelines, "Coverage: Strive for high test coverage on critical logic paths." and "Readability: Use table-driven tests for multiple cases."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@internal/service/estimation_test.go` around lines 392-422: The test only asserts the fallback for the "Easy (0-10TB)" legacy label; extend it to iterate over all legacy DiskSizeTier labels ("Easy (0-10TB)", "Medium (10-20TB)", "Hard (20-50TB)", "White Glove (>50TB)") using a table-driven subtest so each label is validated: for each label buildDiskSizeTier with that label and the corresponding api.DiskSizeTierSummary, marshal the inventory (using createTestInventoryForComplexity and mockStore.assessments as done), call estimationSrv.CalculateMigrationComplexity(ctx, assessmentID, clusterID), and assert that result.ComplexityByDisk contains the expected Score, VMCount and TotalSizeTB for that label (reference result.ComplexityByDisk, buildDiskSizeTier, createTestInventoryForComplexity, and CalculateMigrationComplexity to locate code).

pkg/estimations/complexity/complexity.go (1)
132-136: ⚠️ Potential issue | 🟡 Minor

**Align tier-4 doc boundary with `50+TiB` semantics.**

Line 135 says `>50TiB`, but the exported label is `50+TiB` and the implemented bucket semantics are `>=50TiB`. Please update the comment to remove boundary ambiguity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/estimations/complexity/complexity.go` around lines 132 - 136, Summary: The comment for Score 4 is ambiguous (" >50TiB") versus the exported label "50+TiB" and implemented bucket semantics (>=50TiB); update the comment to match the implementation. Modify the block comment in complexity.go that documents the scoring tiers so that the Score 4 line reads "Score 4 — >=50TiB" (or "Score 4 — 50+TiB (>=50TiB)") to remove ambiguity and align with the exported label and bucket logic used elsewhere.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/duckdb_parser/templates/disk_complexity_tier_query.go.tmpl`:
- Around line 21-24: The WHEN clauses in the disk complexity tier SQL template
use strict "<" comparisons so exact boundary values (10, 20, 50) fall into the
next bucket; update the conditions that reference total_disk_tb in the template
(the WHEN lines) to use "<=" for the 10, 20 and 50 cutoffs so values equal to
10, 20 or 50 are classified into '0-10TiB', '10-20TiB', and '20-50TiB'
respectively.
---
Outside diff comments:
In `@internal/service/estimation_test.go`:
- Around line 534-562: Add a new test case next to the existing ones that
constructs an inventory where DiskComplexityTier is an empty map (e.g.,
&emptyTier) but DiskSizeTier is populated with valid entries, marshal it into
data and insert into mockStore.assessments via
createTestAssessmentFromRawInventory, then call
estimationSrv.CalculateMigrationComplexity(ctx, assessmentID, clusterID) and
assert it returns a non-nil result and nil error (and optionally assert expected
complexity fields), verifying the service’s len(*diskSource) == 0 fallback path
for DiskComplexityTier works when DiskSizeTier is present.
---
Duplicate comments:
In `@internal/service/estimation_test.go`:
- Around line 33-45: The helper createTestInventoryForComplexity currently
always injects a hardcoded diskComplexityTier which masks test variations;
change it to stop setting DiskComplexityTier to the fixed map and instead derive
it from the provided diskSizeTier parameter (or set DiskComplexityTier to nil
when diskSizeTier is nil) so tests that vary diskSizeTier or expect fallback
mapping exercise the real logic in functions that consume DiskComplexityTier and
DiskSizeTier (refer to createTestInventoryForComplexity, DiskComplexityTier, and
DiskSizeTier in tests).
- Around line 392-422: The test only asserts the fallback for the "Easy
(0-10TB)" legacy label; extend it to iterate over all legacy DiskSizeTier labels
("Easy (0-10TB)", "Medium (10-20TB)", "Hard (20-50TB)", "White Glove (>50TB)")
using a table-driven subtest so each label is validated: for each label
buildDiskSizeTier with that label and corresponding api.DiskSizeTierSummary,
marshal the inventory (using createTestInventoryForComplexity and
mockStore.assessments as done), call
estimationSrv.CalculateMigrationComplexity(ctx, assessmentID, clusterID), and
assert that result.ComplexityByDisk contains the expected Score, VMCount and
TotalSizeTB for that label (reference result.ComplexityByDisk,
buildDiskSizeTier, createTestInventoryForComplexity, and
CalculateMigrationComplexity to locate code).
In `@pkg/estimations/complexity/complexity.go`:
- Around line 132-136: Summary: The comment for Score 4 is ambiguous (" >50TiB")
versus the exported label "50+TiB" and implemented bucket semantics (>=50TiB);
update the comment to match the implementation. Modify the block comment in
complexity.go that documents the scoring tiers so that the Score 4 line reads
"Score 4 — >=50TiB" (or "Score 4 — 50+TiB (>=50TiB)") to remove ambiguity and
align with the exported label and bucket logic used elsewhere.
In `@pkg/inventory/converters/to_api_test.go`:
- Around line 361-411: The table-driven test TestToAPIVMs_DiskComplexityTier can
run its subtests in parallel; update the test to call t.Parallel() for the
parent test and inside each t.Run subtest (or at minimum inside each subtest) so
each case executes concurrently and remains independent; locate the
TestToAPIVMs_DiskComplexityTier function and add t.Parallel() near the top of
the test and inside the anonymous func passed to t.Run to enable parallel
execution of subtests that call toAPIVMs and assert on
result.DiskComplexityTier.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: ebf25f02-a046-4ee4-aa59-b1611008ad4d
📒 Files selected for processing (18)

api/v1alpha1/agent/spec.gen.go
api/v1alpha1/openapi.yaml
api/v1alpha1/spec.gen.go
api/v1alpha1/types.gen.go
internal/handlers/v1alpha1/estimation_test.go
internal/service/estimation.go
internal/service/estimation_test.go
pkg/duckdb_parser/builder.go
pkg/duckdb_parser/inventory_builder.go
pkg/duckdb_parser/inventory_builder_test.go
pkg/duckdb_parser/queries.go
pkg/duckdb_parser/templates/disk_complexity_tier_query.go.tmpl
pkg/duckdb_parser/templates/disk_size_tier_query.go.tmpl
pkg/estimations/complexity/complexity.go
pkg/estimations/complexity/complexity_test.go
pkg/inventory/converters/to_api.go
pkg/inventory/converters/to_api_test.go
pkg/inventory/model.go
Force-pushed from aac1a83 to 936be5d (Compare)
```yaml
diskComplexityTier:
  type: object
```
Why not use the complexityDistribution?
What am I missing here?
Not sure I understand the question :-)

Under the hood both objects are implemented as "diskSizeTierSummary", but they each collect different data.

ComplexityDistribution is the aggregation of the disk and OS complexity (and I have no choice but to collect it in the inventory, because we aggregate the OS and disk datums).

DiskComplexityTier is a duplication of DiskSizeTier, but with different granularity: it classifies the VMs purely by their disk size. Before this change I could have used DiskSizeTier, but now the tiers used here and by the complexity heuristics have drifted apart, so I had to add this.
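The two tierings described above can be sketched side by side. This is an illustrative stand-alone sketch, not the project's code: the bucket labels come from this PR, but the exact boundary handling (`<=` keeps exact cut points in the lower bucket) is an assumption — the review thread itself flags the `<` vs `<=` choice in the SQL template as open:

```go
package main

import "fmt"

// diskComplexityTier buckets a VM's total disk size (in TiB) into the
// coarse tiers the complexity heuristics use.
// Assumption: boundary values fall into the lower bucket.
func diskComplexityTier(totalTiB float64) string {
	switch {
	case totalTiB <= 10:
		return "0-10TiB"
	case totalTiB <= 20:
		return "10-20TiB"
	case totalTiB <= 50:
		return "20-50TiB"
	default:
		return "50+TiB"
	}
}

// diskSizeTier buckets the same value into the finer-grained tiers the
// inventory now collects (1 TiB = 1024 GiB).
func diskSizeTier(totalGiB float64) string {
	switch {
	case totalGiB <= 100:
		return "0-100GiB"
	case totalGiB <= 500:
		return "100-500GiB"
	case totalGiB <= 1024:
		return "500GiB-1TiB"
	case totalGiB <= 2*1024:
		return "1-2TiB"
	case totalGiB <= 5*1024:
		return "2-5TiB"
	case totalGiB <= 10*1024:
		return "5-10TiB"
	case totalGiB <= 20*1024:
		return "10-20TiB"
	default:
		return "20+TiB"
	}
}

func main() {
	// The same VM lands in different buckets in each scheme.
	fmt.Println(diskComplexityTier(8.5))    // 0-10TiB
	fmt.Println(diskSizeTier(8.5 * 1024))   // 5-10TiB
}
```

This makes the drift concrete: the inventory's `DiskSizeTier` granularity no longer lines up with the four complexity buckets, hence the separate `DiskComplexityTier` field.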
/hold

/remove-hold
This feature changes the disk size tiers stored in the inventory, and decouples the complexity heuristics from those collected disk size tiers.

- Disk size tiers change
  - From: Easy (0-10TB) | Medium (10-20TB) | Hard (20-50TB) | White Glove (>50TB)
  - To: 0-100GiB | 100-500GiB | 500GiB-1TiB | 1-2TiB | 2-5TiB | 5-10TiB | 10-20TiB | 20+TiB
- Added DiskComplexityTier object to the inventory, with the tiers defined by the Jupyter notebook

Signed-off-by: Ami Malimovka <amalimov@redhat.com>

# Conflicts:
#	api/v1alpha1/spec.gen.go
Force-pushed from 936be5d to 9cdb3a4 (Compare)
This feature changes the disk size tiers stored in the inventory, and decouples the complexity heuristics from those collected disk size tiers.
Summary by CodeRabbit
New Features
Improvements
Documentation
Tests