SprintIQ Testing Report (Aug 17, 2025 – Senior Product Manager Perspective)
Objective
As a Senior Product Manager at a B2B SaaS company, the goal of this session was to evaluate SprintIQ's ability to provide insights into current Jira projects. The platform advertises automated dashboards, AI‑driven agents, document analysis and comparison tools, and AI‑delegated tasks. I tested each of these functions by creating dashboards, exploring the available agents, uploading and comparing product requirement documents and checking how tasks were created. The report below summarizes each test and notes successes, issues and opportunities for improvement.
Summary of Tests and Outcomes

Test area: Custom dashboard creation
What was done: Navigated to Dashboards & Analytics → clicked Create New Dashboard. Named the dashboard "Jira Project Overview" and described it as "Track Jira issues, team velocity, release metrics." Entered the builder and used the widget sidebar to add Issue Resolution Time, Team Velocity and Sprint Progress. Saved the layout.
Outcome: The dashboard builder was intuitive once the widgets panel was found. Widgets displayed live metrics (e.g., issue resolution time 2.8 days, team velocity 229 points). The dashboard saved correctly and appeared in the list with the ability to edit.
Issues / areas for improvement: Locating the widget sidebar was not obvious; the button is small and hidden on the left. Widgets state that they can be dragged, but dragging actually opens a details view; repositioning widgets is not supported. Several widgets (e.g., Sprint Progress) show "no data available" until integrations (GitHub/Linear/Jira) are connected.

Test area: Pre‑built agents in Agent HQ
What was done: Examined categories under Agent HQ (Product Management, Communication, Engineering, Analytics). Attempted to open agents such as PRD Change Tracker, Sprint Health Monitor, JIRA Blocker Detective and GitHub Activity Analyzer by clicking their cards.
Outcome: Could not open or run any agent. Hovering over cards shows a small arrow, but clicking does nothing. The right‑hand panel where agent results should appear remains blank.
Issues / areas for improvement: Without integrations to Jira, GitHub or Slack, the agents may not run. However, there is no error message or guidance on connecting them. A trial user should at least be able to explore agent settings or see example output.

Test area: Document analysis (upload & scan)
What was done: Visited PRD Analysis → Document Upload & Analysis. Created two sample PRD text files (sample_prd.txt and sample_prd_v2.txt) with typical onboarding requirements. Uploaded both files using the upload area. After upload, clicked AI‑Powered Scan on the file cards.
Outcome: The upload process worked and the platform displayed a word count and a "confidence" score (~85%). However, the AI‑Powered Scan only returned a generic statement ("Document successfully processed") with no extracted requirements or insights.
Issues / areas for improvement: Document analysis appears shallow. It did not summarize requirements, highlight acceptance criteria or identify potential work items. More detailed extraction (e.g., listing key user stories or generating tasks) would make this feature valuable.

Test area: Document comparison (Smart Diff)
What was done: After uploading two versions of the PRD, used Document Comparison. Selected the two files in compare mode and clicked Compare Documents. Also tried switching to the Document Comparison tab.
Outcome: The system allowed selection of two documents and displayed a message "Selected for comparison," but pressing Compare Documents did not display any results. The Document Comparison tab contains only static text about features and best practices; there is no way to see a diff.
Issues / areas for improvement: The comparison tool seems non‑functional. There is no feedback when clicking compare. Users need a side‑by‑side view showing added/removed requirements and AI commentary linking changes to Jira tasks.

Test area: AI‑delegated tasks & team view
What was done: Went to the Team page (Sprint Execution tab). Selected the new Jira Project Overview dashboard in the AI‑Delegated Tasks section. The system shows "3 widgets available for analysis" and "Generating AI insights…"
Outcome: Even after waiting several minutes, no tasks were generated and the indicator continued spinning. Sprint summary panels (active, planned, completed sprints) all show zero.
Issues / areas for improvement: Without integrations to Jira and calendar, no actionable tasks are created. The UI should inform users that tasks require connecting Jira/Google Calendar, rather than just spinning indefinitely.

Test area: Overall navigation and UI
What was done: Tested menu navigation (Agent HQ, Dashboards, PRD Analysis, Team, OKR Alignment, Integrations).
Outcome: Navigation is clear with a sidebar and top header. Each section loads quickly.
Issues / areas for improvement: Many features are blocked or show placeholders when integrations are not connected. The application should clearly indicate prerequisites and possibly provide a sandbox with demo data so that new users can evaluate capabilities.
General Observations
• Integrations are essential. Many widgets and agents depend on external services (Jira, GitHub, Slack, Linear). Without a connection, large portions of the product show no data or cannot be activated. The trial version should either provide sample integrations or at least explain that nothing will happen until the user connects their tools.
• Document analysis is basic. For uploaded PRD files, the AI did not produce actionable insights. It neither extracted key requirements nor generated any suggested Jira tasks or changes. This makes it difficult to justify using the feature for real PRD monitoring.
• Agent HQ UI is confusing. Pre‑built agents look promising (e.g., JIRA Blocker Detective promises to identify blocked tickets), but none of the cards can be opened. It is unclear whether this is because of the trial tier or a UI bug. There should be a clear call to action or a disabled state explaining why they cannot be run.
• Task generation spinner never completes. On the Team page, selecting a dashboard to analyze results in a perpetual "Generating AI insights…" spinner with zero tasks created. This suggests either a backend issue or a missing integration. Again, a message explaining the dependency on Jira or Google Calendar would help.
• Missing features for a product manager. To monitor Jira projects effectively, one would expect the platform to:
• extract user stories and acceptance criteria from PRDs and automatically map them to Jira tickets;
• compare PRD versions and highlight changes with impact analysis;
• run agents to detect sprint blockers, team velocity issues and upcoming deadlines; and
• create tasks or notifications for the team based on these insights.
None of these functions were observed during testing.
Recommendations
1. Provide demo data or sandbox integrations. For first‑time users, having a sample Jira project and GitHub repo connected would allow them to see dashboards and agents in action before configuring real integrations.
2. Clarify integration requirements. The UI should display warnings or tooltips on each feature that depends on an integration (e.g., "Connect Jira to use this widget") instead of silently showing blank widgets or endless spinners.
3. Enhance document analysis. The AI should parse uploaded PRDs to extract epics, user stories and acceptance criteria, and suggest linking them to existing Jira issues or generating new ones. Results should be visible immediately in the analysis card.
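To illustrate the kind of extraction recommended here, a minimal Python sketch that pulls user stories and acceptance criteria out of PRD text with simple pattern matching. The sample text, the "As a …, I want …" story convention and the "Acceptance criteria:" label are assumptions made for this example, not formats SprintIQ is documented to use; a production version would need an AI model rather than regexes.

```python
import re

# Illustrative PRD text (invented for this sketch).
prd_text = """
As a new user, I want to sign up with email so that I can create an account.
As an admin, I want to see onboarding metrics so that I can track adoption.
Acceptance criteria: signup completes in under 30 seconds.
"""

# Pull out one-sentence user stories and acceptance-criteria lines.
stories = re.findall(r"As an? .+?, I want .+?\.", prd_text)
criteria = [line.strip() for line in prd_text.splitlines()
            if line.lower().startswith("acceptance criteria")]

print(stories)   # two user stories
print(criteria)  # one acceptance-criteria line
```

Even this shallow level of structure (stories plus criteria) would be enough to populate the analysis card and to propose candidate Jira issues for each extracted story.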
4. Fix or improve document comparison. The Smart Diff feature should produce a side‑by‑side diff of two documents with added/removed/modified lines and an AI summary describing the impact.
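For reference, the added/removed behavior described here is what a standard text diff already produces; the AI layer would only need to summarize it. A minimal sketch using Python's difflib, with invented PRD contents (only the file names sample_prd.txt and sample_prd_v2.txt come from the test session):

```python
import difflib

# Invented PRD contents for illustration; only the file names mirror the
# samples uploaded during testing.
v1 = [
    "Users can sign up with email.",
    "Onboarding wizard has three steps.",
    "Send a welcome email after signup.",
]
v2 = [
    "Users can sign up with email or SSO.",
    "Onboarding wizard has three steps.",
    "Show a product tour after signup.",
]

# unified_diff prefixes removed lines with '-' and added lines with '+';
# an AI summary could then be generated from exactly these hunks.
diff = list(difflib.unified_diff(v1, v2, fromfile="sample_prd.txt",
                                 tofile="sample_prd_v2.txt", lineterm=""))
print("\n".join(diff))
```

A side‑by‑side HTML view can be produced the same way with difflib.HtmlDiff, so even a baseline implementation of Smart Diff should be able to show changed requirements rather than returning nothing.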
5. Enable agent interactions. Users need the ability to open and configure pre‑built agents. If a particular agent requires an integration, show a prompt to connect rather than disabling the entire agent UI.
6. Improve AI‑delegated task generation. Provide clarity on what triggers task creation and ensure that tasks appear after analysis. If tasks cannot be generated due to missing data, display a message instead of a perpetual spinner.
Conclusion
SprintIQ has the potential to streamline product management by combining AI‑powered analysis, dashboards and agent‑driven tasks. However, the current trial experience lacks the depth and interactivity needed for a Senior Product Manager to evaluate its effectiveness. The biggest roadblocks are unavailable integrations, minimal PRD analysis and non‑functional agents. Addressing these issues would make it easier for new users to see the value and adopt SprintIQ in their workflows.