refactor(internals): iterative mountAtom (scale)#3281

Open
dmaskasky wants to merge 2 commits into pmndrs:main from dmaskasky:iterative-mount-experiment

Conversation

@dmaskasky
Collaborator

Summary

Refactors mountAtom to use an iterative solution.
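To illustrate the kind of transformation this PR makes, here is a minimal, self-contained sketch of converting a recursive post-order mount into an iterative one driven by explicit parallel stacks. The `Node` type and function names are hypothetical stand-ins for jotai's atom graph, not its actual internals:

```typescript
// Hypothetical tree type standing in for the atom dependency graph.
type Node = { id: string; deps: Node[] }

// Recursive form: simple, but deep dependency chains can overflow the call stack.
function mountRecursive(node: Node, mounted: string[]): void {
  for (const dep of node.deps) mountRecursive(dep, mounted)
  mounted.push(node.id) // mount after all deps are mounted (post-order)
}

// Iterative form: an explicit stack plus a parallel "visited" stack,
// mirroring the atomStack/stateStack pairing in this PR.
function mountIterative(root: Node, mounted: string[]): void {
  const nodeStack: Node[] = [root]
  const visitedStack: boolean[] = [false]
  while (nodeStack.length) {
    const node = nodeStack.pop()!
    const visited = visitedStack.pop()!
    if (visited) {
      mounted.push(node.id) // all deps handled; mount this node
    } else {
      // Revisit this node after its deps have been processed.
      nodeStack.push(node)
      visitedStack.push(true)
      // Push deps in reverse so they pop in original order.
      for (let i = node.deps.length - 1; i >= 0; i--) {
        nodeStack.push(node.deps[i])
        visitedStack.push(false)
      }
    }
  }
}
```

This sketch ignores shared dependencies and cycles, which the real store has to handle; it only shows how recursion becomes an explicit stack.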

Check List

  • pnpm run fix for formatting and linting code and docs

@vercel

vercel bot commented Mar 30, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project | Deployment | Actions | Updated (UTC)
jotai | Ready | Preview, Comment | Apr 2, 2026 0:39am

@codesandbox-ci

codesandbox-ci bot commented Mar 30, 2026

This pull request is automatically built and testable in CodeSandbox.

To see build info of the built libraries, click here or the icon next to each commit SHA.

@pkg-pr-new

pkg-pr-new bot commented Mar 30, 2026

npm i https://pkg.pr.new/jotai@3281

commit: d3b2c48

@github-actions

github-actions bot commented Mar 30, 2026

Preview in LiveCodes

Latest commit: d3b2c48
Last updated: Apr 2, 2026 12:39am (UTC)

Playground | Link
React demo | https://livecodes.io?x=id/3BDGXU74B

See the documentation for usage instructions.

@dmaskasky dmaskasky changed the title Iterative mount experiment Iterative mountAtom Mar 30, 2026
@dmaskasky dmaskasky changed the title Iterative mountAtom iterative mountAtom Mar 30, 2026
Member

@dai-shi dai-shi left a comment

Do you think you can create a failing test case with recursive mountAtom?

Comment on lines +825 to +826
const atomStack: AnyAtom[] = [atom]
const stateStack: (AtomState | undefined)[] = [undefined]
Member

Have you tried const stack: [AnyAtom, AtomState][] and compared the performance?

Collaborator Author

Yeah, it's worse because of the added overhead of creating new arrays on every iteration.

Member

Oh, really... That means push/pop are more lightweight, correct?

Collaborator Author

Parallel arrays are faster because they avoid allocating a tiny tuple array for every entry.

With tuple arrays, each push does all of this:

  • allocate a new [a, aState] array
  • write its two elements
  • push a reference to that tuple into the outer stack

Then each pop does this:

  • pop the tuple reference
  • dereference that tuple object
  • read element 0 and element 1
  • later let GC reclaim that tuple

With parallel arrays, each entry is just:

  • push a into aStack
  • push aState into stateStack

and later:

  • pop from aStack
  • pop from stateStack

So parallel arrays usually win because they have:

  • no per-entry object allocation
  • less garbage collection
  • less pointer indirection
  • better memory locality

The extra push/pop calls are usually cheaper than creating and later collecting a fresh tuple object for every item.
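To make the comparison concrete, here is a small self-contained sketch of the two layouts. The `Item` type and function names are illustrative, not jotai's internals; both functions compute the same result, but the tuple version allocates one short-lived array per entry:

```typescript
// `Item` stands in for AtomState in this illustration.
type Item = { value: number }

// Tuple-array stack: every push allocates a fresh two-element array.
function sumWithTupleStack(items: Item[]): number {
  const stack: [number, Item][] = []
  for (let i = 0; i < items.length; i++) {
    stack.push([i, items[i]]) // allocates a tuple per entry
  }
  let total = 0
  while (stack.length) {
    const [index, item] = stack.pop()! // dereference the tuple, read both slots
    total += index + item.value
  }
  return total
}

// Parallel-array stack: two pushes per entry, no per-entry allocation.
function sumWithParallelStacks(items: Item[]): number {
  const indexStack: number[] = []
  const itemStack: Item[] = []
  for (let i = 0; i < items.length; i++) {
    indexStack.push(i)
    itemStack.push(items[i])
  }
  let total = 0
  while (indexStack.length) {
    const index = indexStack.pop()!
    const item = itemStack.pop()!
    total += index + item.value
  }
  return total
}
```

In practice the gap depends on the engine, since V8 can sometimes elide short-lived allocations, so measuring both variants, as was done for this PR, is the reliable way to decide.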

@dai-shi dai-shi added this to the v2.19.1 milestone Apr 2, 2026
@dmaskasky dmaskasky changed the title iterative mountAtom refactor(internals): iterative mountAtom (scale) Apr 3, 2026
2 participants