Conversation
Warning This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
This stack of pull requests is managed by Graphite.
This crate might need a better name, but it is the standard interface between tracing and provers. We might want to generalize this to support recursion "tracing" too.
Warning This PR has more than 500 changed lines and does not include a spec. Large features and architectural changes benefit from a spec-driven workflow. If this PR is a bug fix, refactor, or doesn't warrant a spec, feel free to ignore this message. |
Benchmark comparison (crates)
```rust
pub fn build_with_features(&mut self, target_dir: &str, extra_features: &[&str]) {
    if self.elf.is_some() {
        return;
    }
```
The early return logic is incorrect and prevents building the compute_advice variant after building the regular variant. The code only checks self.elf.is_some() but doesn't account for which variant is being built.
What breaks: If a user first calls build() (which sets self.elf), then later calls build_with_features(target_dir, &["compute_advice"]), the method returns early without building the compute_advice variant, even though self.elf_compute_advice is still None.
Fix:
```rust
let is_compute_advice = extra_features.contains(&"compute_advice");
if is_compute_advice {
    if self.elf_compute_advice.is_some() {
        return;
    }
} else {
    if self.elf.is_some() {
        return;
    }
}
```
Move the `is_compute_advice` determination before the early-return check, and check the appropriate field based on which variant is being built.
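To make the failure mode concrete, here is a hedged, self-contained sketch of the fix — `Program` here is a minimal stand-in struct, not the real jolt type, and the "build" is stubbed out so the caching logic can be exercised on its own:

```rust
// Hypothetical stand-in for the real Program builder; only the two ELF
// cache slots from the review comment are modeled.
#[derive(Default)]
struct Program {
    elf: Option<Vec<u8>>,
    elf_compute_advice: Option<Vec<u8>>,
}

impl Program {
    fn build_with_features(&mut self, _target_dir: &str, extra_features: &[&str]) {
        // Decide which variant is requested BEFORE the early return, and
        // check the cache slot that belongs to that variant.
        let is_compute_advice = extra_features.contains(&"compute_advice");
        if is_compute_advice {
            if self.elf_compute_advice.is_some() {
                return;
            }
        } else if self.elf.is_some() {
            return;
        }
        // Stand-in for the real cargo build step.
        let elf = vec![is_compute_advice as u8];
        if is_compute_advice {
            self.elf_compute_advice = Some(elf);
        } else {
            self.elf = Some(elf);
        }
    }
}

fn main() {
    let mut p = Program::default();
    p.build_with_features("target", &[]); // builds the regular variant
    p.build_with_features("target", &["compute_advice"]); // no longer skipped
    assert!(p.elf.is_some());
    assert!(p.elf_compute_advice.is_some());
}
```

With the original single `self.elf.is_some()` check, the second call would return early and the assertion on `elf_compute_advice` would fail.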
Spotted by Graphite
🔍 Starting automated code review process. I'll analyze the changes for semantic consistency, bugs, tech debt, and security concerns. Results will be posted shortly. Generated by Claude Code |
0xAndoroid left a comment
There was a problem hiding this comment.
Code Review: jolt-trace crate extraction
Good architectural direction — extracting the host-side tracing into a standalone crate with the CycleRow trait abstraction cleanly decouples the tracer from the proving system. The Program builder, flag dispatch tables, and test coverage are solid.
Issues found

Bugs (2):

1. `lookup_output()` returns incorrect values for branch instructions (cycle_row_impl.rs:145-155). The doc comment promises comparison results (0 or 1) for branches, but the implementation returns 0. In jolt-core, branches like BEQ/BGE compute actual comparisons via `LookupQuery::to_lookup_output()`. This will produce incorrect witnesses when `CycleRow` replaces the existing path.
2. `build_with_features()` early-return skips compute_advice builds (program.rs:137). Also flagged by Graphite — the `self.elf.is_some()` check doesn't distinguish between regular and compute_advice variants.

Missing data (1):

- `decode()` drops `e_entry` (program.rs:384-395). jolt-core returns 4 values including the ELF entry point, which is required by `JoltSharedPreprocessing::new()`. The new function returns only 3, silently discarding `e_entry` from `tracer::decode()`.
Minor observations

- `_ => unreachable!()` in flag dispatch is correct for the current instruction set (the tracer expands others) but fragile. A comment or explicit `INLINE(_)` match would help.
- Dropping `_advice_tape` in `trace()` matches jolt-core's existing pattern — consistent.
- `trace_to_file` taking `&Path` instead of `&PathBuf` is a nice improvement over jolt-core.
- Removing the unused `F: JoltField` generic from `trace_analyze` is correct.
Generated by Claude Code
```rust
fn lookup_output(&self) -> u64 {
    let cflags = self.circuit_flags();
    if cflags[CircuitFlags::WriteLookupOutputToRD] {
        self.rd_write().map_or(0, |(_, _, post)| post)
    } else if cflags[CircuitFlags::Jump] {
        self.rd_write().map_or(0, |(_, _, post)| post)
    } else {
        0
    }
}
```
Bug: `lookup_output()` implementation doesn't match its doc comment contract.
The doc says "For branches: the comparison result (0 or 1)" but the implementation returns 0 for all branch instructions (BEQ, BGE, BGEU, BLT, BLTU, BNE). Branch instructions don't set the `WriteLookupOutputToRD` or `Jump` flags, so they fall through to the `else { 0 }` arm.
In jolt-core, `to_lookup_output()` for branches actually computes the comparison. For example, BGE computes `((x as i64) >= (y as i64)) as u64` (see jolt-core/src/zkvm/instruction/bge.rs:53-62). This value is used in the R1CS witness as `should_branch_lookup_output` (see jolt-core/src/zkvm/r1cs/inputs.rs:503).
Since CycleRow is not consumed by jolt-core yet, this is latent — but it will cause incorrect witness generation when this trait replaces the existing LookupQuery::to_lookup_output() path. You'd need to either:
- Evaluate the lookup table here (requires access to the table materializer), or
- Re-derive the comparison from operands using the instruction flags (similar to how `lookup_index` derives its value from flags).
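A hedged sketch of the second option — the dispatch below is illustrative (keyed by opcode name rather than the tracer's real `Instruction` enum), but the comparison semantics match the RISC-V branch definitions cited above:

```rust
// Hypothetical helper: derive a branch's lookup output (0 or 1) from its
// two operands. Signed branches reinterpret the u64 operands as i64,
// matching the BGE example from jolt-core's bge.rs cited in the review.
fn branch_lookup_output(op: &str, x: u64, y: u64) -> u64 {
    match op {
        "BEQ" => (x == y) as u64,
        "BNE" => (x != y) as u64,
        "BLT" => ((x as i64) < (y as i64)) as u64,
        "BGE" => ((x as i64) >= (y as i64)) as u64,
        "BLTU" => (x < y) as u64,
        "BGEU" => (x >= y) as u64,
        _ => 0, // non-branch instructions keep the existing behavior
    }
}

fn main() {
    // u64::MAX reinterprets as -1 for the signed comparisons.
    assert_eq!(branch_lookup_output("BGE", u64::MAX, 1), 0); // -1 >= 1 is false
    assert_eq!(branch_lookup_output("BGE", 1, u64::MAX), 1); // 1 >= -1 is true
    assert_eq!(branch_lookup_output("BLTU", 1, u64::MAX), 1); // unsigned compare
    assert_eq!(branch_lookup_output("BEQ", 7, 7), 1);
}
```

The signed/unsigned split is the part the current `else { 0 }` arm silently loses.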
Generated by Claude Code
```rust
Instruction::VirtualXORROTW8(_) => Flags::circuit_flags(&xor_rotate::VirtualXorRotW8),
Instruction::VirtualXORROTW7(_) => Flags::circuit_flags(&xor_rotate::VirtualXorRotW7),
Instruction::NoOp => CircuitFlagSet::default().set(CircuitFlags::DoNotUpdateUnexpandedPC),
_ => unreachable!(),
```
Nit: The instruction coverage here matches jolt-core's define_rv32im_trait_impls! set exactly, which is correct — the tracer expands unsupported base instructions (SLL, LB, LW, atomics, etc.) into virtual sequences during emulation.
However, unreachable!() is fragile: if a new un-expanded instruction is added to the tracer, this panics at runtime with no compile-time warning. Consider either:
- Adding a comment explaining why this is safe (the tracer expands all other instructions), or
- Matching `Instruction::INLINE(_)` explicitly (like `lookup_table_kind_for_instruction` does at line 472) instead of relying on `_`.
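The compile-time payoff of the explicit arm can be shown with a toy enum — a hedged sketch, not the tracer's real `Instruction` type or flag encoding:

```rust
// Hypothetical two-variant enum standing in for the tracer's Instruction.
enum Instruction {
    NoOp,
    Inline(u32),
}

fn circuit_flags(instr: &Instruction) -> u8 {
    match instr {
        Instruction::NoOp => 0b01,
        // Explicit arm instead of `_ => unreachable!()`: if a new
        // un-expanded variant is later added to the enum, this match
        // becomes non-exhaustive and fails to COMPILE, rather than
        // panicking at runtime deep inside witness generation.
        Instruction::Inline(_) => 0b00,
    }
}

fn main() {
    assert_eq!(circuit_flags(&Instruction::NoOp), 0b01);
    assert_eq!(circuit_flags(&Instruction::Inline(0)), 0b00);
}
```

That compile-error-on-new-variant property is exactly what the catch-all `_` arm gives up.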
Generated by Claude Code
```rust
pub fn decode(elf: &[u8]) -> (Vec<Instruction>, Vec<(u64, u8)>, u64) {
    let (mut instructions, raw_bytes, program_end, _, xlen) = tracer::decode(elf);
    let program_size = program_end - RAM_START_ADDRESS;
    let allocator = VirtualRegisterAllocator::default();

    instructions = instructions
        .into_iter()
        .flat_map(|instr: Instruction| instr.inline_sequence(&allocator, xlen))
        .collect();

    (instructions, raw_bytes, program_size)
}
```
Missing return value: jolt-core's guest::program::decode() returns 4 values (Vec<Instruction>, Vec<(u64, u8)>, u64, u64) — the 4th being e_entry (ELF entry point address). This function only returns 3.
e_entry is required by JoltSharedPreprocessing::new() and is used extensively in the prover pipeline (see every call site in jolt-core/src/zkvm/prover.rs). When jolt-trace eventually replaces the host decode path, callers won't have access to e_entry.
The underlying tracer::decode() does return e_entry — it's just being discarded on line 385. Consider returning it as the 4th tuple element to match jolt-core's interface.
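A hedged sketch of the threading fix — `tracer_decode` below is a stub standing in for the real `tracer::decode` (same tuple order as the quoted snippet: instructions, raw bytes, program end, `e_entry`, xlen), and `Instruction` is a placeholder type:

```rust
// Hypothetical stand-ins so the shape is runnable; not the real types.
type Instruction = u32;
const RAM_START_ADDRESS: u64 = 0x8000_0000;

// Stub for tracer::decode's 5-tuple, with e_entry in the 4th position as
// in the quoted destructuring.
fn tracer_decode(_elf: &[u8]) -> (Vec<Instruction>, Vec<(u64, u8)>, u64, u64, u8) {
    (vec![0x13], vec![(RAM_START_ADDRESS, 0x13)], RAM_START_ADDRESS + 4, RAM_START_ADDRESS, 64)
}

// Bind e_entry instead of `_` and return it as a 4th tuple element,
// matching jolt-core's interface.
pub fn decode(elf: &[u8]) -> (Vec<Instruction>, Vec<(u64, u8)>, u64, u64) {
    let (instructions, raw_bytes, program_end, e_entry, _xlen) = tracer_decode(elf);
    let program_size = program_end - RAM_START_ADDRESS;
    (instructions, raw_bytes, program_size, e_entry)
}

fn main() {
    let (_, _, size, e_entry) = decode(&[]);
    assert_eq!(size, 4);
    assert_eq!(e_entry, RAM_START_ADDRESS);
}
```

The only real change is replacing the `_` in the destructuring with `e_entry` and widening the return tuple; the inline-sequence expansion step is elided here.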
Generated by Claude Code
```rust
pub fn build_with_features(&mut self, target_dir: &str, extra_features: &[&str]) {
    if self.elf.is_some() {
        return;
    }
```
Bug (also flagged by Graphite): The early-return check on line 137 only checks self.elf.is_some() but doesn't account for the compute_advice variant. If build() was already called (setting self.elf), a subsequent build_with_features(dir, &["compute_advice"]) silently returns without building the compute-advice ELF.
The is_compute_advice check should be moved before the early return:
let is_compute_advice = extra_features.contains(&"compute_advice");
if is_compute_advice {
if self.elf_compute_advice.is_some() { return; }
} else {
if self.elf.is_some() { return; }
}Generated by Claude Code
```rust
fn rs1_read(&self) -> Option<(u8, u64)> {
    self.rs1_read()
}
```
Nit: This calls `self.rs1_read()`, which for a `Cycle` ends up calling `Cycle::rs1_read()` (the inherent method), not `CycleRow::rs1_read()`. This works because the inherent method has the same signature, but it's a subtle dispatch — the trait method delegates to the inherent method of the same name. Might be worth a comment to clarify this is intentional (or use `Cycle::rs1_read(self)` explicitly to make the delegation visible).
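The dispatch rule being relied on can be demonstrated in isolation — a hedged sketch with toy types, not the real `Cycle`/`CycleRow`:

```rust
// Hypothetical minimal reproduction: an inherent method and a trait
// method with the same name on the same type.
struct Cycle;

impl Cycle {
    // Inherent method. Method-call syntax (`self.rs1_read()`) prefers
    // inherent methods over trait methods, so the trait impl below does
    // NOT recurse into itself.
    fn rs1_read(&self) -> Option<(u8, u64)> {
        Some((1, 42))
    }
}

trait CycleRow {
    fn rs1_read(&self) -> Option<(u8, u64)>;
}

impl CycleRow for Cycle {
    fn rs1_read(&self) -> Option<(u8, u64)> {
        // Fully qualified form makes the delegation explicit and
        // infinite-recursion-proof even if the inherent method is renamed.
        Cycle::rs1_read(self)
    }
}

fn main() {
    let c = Cycle;
    assert_eq!(CycleRow::rs1_read(&c), Some((1, 42)));
}
```

If the inherent method were removed, `self.rs1_read()` inside the trait impl would silently become a self-call and recurse forever — the fully qualified form turns that into a compile error instead.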
Generated by Claude Code
- Adapt to jolt-riscv + jolt-lookup-tables (replacing jolt-instructions)
- CycleRow trait returns CircuitFlagSet/InstructionFlagSet (packed bitfields)
- with_isa_struct! macro uses jolt_riscv::rv/virt modules
- lookup_table() via InstructionLookupTable extension trait
- Add bytecode.rs, ram.rs, tracer_cycle.rs (renamed from cycle_row_impl)
- Remove stale PLAN.md, README.md, REVIEW.md
- All 26 tests pass, clippy clean
Rename crate directory and package name to match PR convention.

Summary
Changes
Testing
`cargo clippy` and `cargo fmt` pass
Security Considerations
Breaking Changes
None