---
name: starknet-skills
description: Cairo/Starknet skills for AI coding agents with router and module links.
---

# starknet-skills

Built on the Agent Skills open standard — works with any tool that reads markdown, including Claude Code, Cursor, Codex, GitHub Copilot, Gemini CLI, Junie, VS Code, Roo Code, Goose, OpenHands, Amp, Mistral Vibe, TRAE, Firebender, Factory, Databricks, Spring AI, OpenCode, Qodo, Letta, Snowflake, Mux, Piebald, Laravel Boost, Emdash, VT Code, Agentman, Autohand, Command Code, and Ona.

Cairo/Starknet skills for AI coding agents

Security + reasoning knowledge layer for any agent that reads markdown. Built on the Agent Skills open standard — works with 30+ tools. For operational tooling, see starknet-agentic.

## Install & Use

### Claude Code

```
/plugin marketplace add keep-starknet-strange/starknet-skills
/plugin install starknet-skills
```

Then try:

```
Audit src/contract.cairo using the cairo-auditor skill
```

### Cursor

Clone the repo:

```sh
git clone https://github.com/keep-starknet-strange/starknet-skills.git
```

Option 1: add the cloned repo as a context directory in Cursor settings.

Option 2: copy the rule file and selected skills into your project. Run the following from your project root, replacing `/path/to/starknet-skills` with your clone path:

```sh
cd /path/to/your/project
mkdir -p .cursor/rules .cursor/skills
cp /path/to/starknet-skills/.cursor/rules/starknet-skills.md .cursor/rules/
cp -r /path/to/starknet-skills/cairo-auditor .cursor/skills/cairo-auditor
cp -r /path/to/starknet-skills/cairo-testing .cursor/skills/cairo-testing
cp -r /path/to/starknet-skills/cairo-contract-authoring .cursor/skills/cairo-contract-authoring
```

Then try:

```
Write an ERC20 token contract following the cairo-contract-authoring skill
```

### Gemini CLI

Paste the router URL into Gemini CLI chat as context:

```
https://raw.githubusercontent.com/keep-starknet-strange/starknet-skills/main/SKILL.md
```

### VS Code (GitHub Copilot)

Clone the repo into your workspace, then provide the router URL in Copilot chat via `@workspace` (or add it as custom context in VS Code settings):

```sh
git clone https://github.com/keep-starknet-strange/starknet-skills.git
```

```
https://raw.githubusercontent.com/keep-starknet-strange/starknet-skills/main/SKILL.md
```

### OpenAI Codex

Auto-discovered via `AGENTS.md` at the repo root. Clone and open — Codex reads agent instructions automatically.

### JetBrains (Junie)

Clone the repo into your project, then paste the router URL in Junie chat/context:

```sh
git clone https://github.com/keep-starknet-strange/starknet-skills.git
```

```
https://raw.githubusercontent.com/keep-starknet-strange/starknet-skills/main/SKILL.md
```

### Any agent (universal)

Paste this URL into your agent's chat or config — it auto-routes to the right skill:

```
https://raw.githubusercontent.com/keep-starknet-strange/starknet-skills/main/SKILL.md
```

Or load a specific skill directly:

```
https://raw.githubusercontent.com/keep-starknet-strange/starknet-skills/main/cairo-auditor/SKILL.md
https://raw.githubusercontent.com/keep-starknet-strange/starknet-skills/main/cairo-contract-authoring/SKILL.md
https://raw.githubusercontent.com/keep-starknet-strange/starknet-skills/main/cairo-testing/SKILL.md
https://raw.githubusercontent.com/keep-starknet-strange/starknet-skills/main/cairo-optimization/SKILL.md
```

Machine-readable index: `llms.txt`

## Example Prompts

After installing, try these in any agent:

| What you want | What to type |
| --- | --- |
| Audit a contract | `Audit src/vault.cairo for security issues using cairo-auditor` |
| Write a new contract | `Write an upgradeable ERC721 with Ownable using cairo-contract-authoring` |
| Add tests | `Add unit and fuzz tests for src/vault.cairo using cairo-testing` |
| Optimize gas | `Profile and optimize the transfer function using cairo-optimization` |
| Full pipeline | `Write a staking contract, test it, then audit it` |

The agent reads the skill, follows its orchestration steps, and produces structured output (findings report, test suite, optimized code, etc.).

## First Local Audit (60s)

```sh
python scripts/quality/audit_local_repo.py \
  --repo-root /path/to/your/cairo-repo \
  --scan-id local-audit
```

Optional Sierra confirmation (trusted repos only):

```sh
python scripts/quality/audit_local_repo.py \
  --repo-root /path/to/your/cairo-repo \
  --scan-id local-audit-sierra \
  --sierra-confirm \
  --allow-build
```

> Warning: `--allow-build` may execute repository build steps and tooling. Use build mode only on trusted code, or run it in an isolated environment.

Reports are written under `<repo-root>/evals/reports/local/` by default (`.md`, `.json`). Add `--write-findings-jsonl` to also emit `.findings.jsonl`. If a target filename already exists, the script appends `-N` to avoid overwriting.
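The `-N` suffixing behavior can be sketched in a few lines of Python. This is an illustrative sketch of the overwrite-avoidance rule described above, not the script's actual implementation, and `unique_report_path` is a hypothetical helper name:

```python
from pathlib import Path

def unique_report_path(path: Path) -> Path:
    """Return `path` if it does not exist yet; otherwise append -1, -2, ...
    before the suffix until a free filename is found (e.g. report.md ->
    report-1.md). Sketch of the behavior described above, not the real code."""
    if not path.exists():
        return path
    n = 1
    while True:
        candidate = path.with_name(f"{path.stem}-{n}{path.suffix}")
        if not candidate.exists():
            return candidate
        n += 1
```

For example, if `local-audit.md` already exists in the reports directory, the next run would write `local-audit-1.md` instead of overwriting it.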

## Skill Modules

| Module | What LLMs Commonly Miss |
| --- | --- |
| cairo-auditor | Misses Starknet upgrade/account edge cases and weak FP gates |
| cairo-contract-authoring | Applies Solidity structure directly to Cairo components |
| cairo-testing | Stops at unit tests and skips invariants/adversarial regression coverage |
| cairo-optimization | Optimizes the wrong paths without trace/Sierra context |
| cairo-toolchain | Uses stale Scarb/snforge/sncast workflows |
| account-abstraction | Misses session-key/self-call and validation-flow pitfalls |
| starknet-network-facts | Hallucinates network semantics and fee/timing assumptions |

Recommended sequence for new contracts: `cairo-contract-authoring` -> `cairo-testing` -> `cairo-auditor`.

## Data Pipeline

```
ingest -> segment -> normalize -> distill -> skillize
  24        26         217          9          7
audits   corpora    findings     assets     skills
```

Snapshot counts are maintainer-updated. When normalized findings change, update this table and the badge labels together.

## Quality Signals

Deterministic benchmarks are smoke/regression gates, not final proof of auditor quality.

## Methodology

Skills are authored from audit-backed source material, then checked with deterministic gates and a held-out evaluation policy before landing. The goal is reusable, high-signal corrections for common Cairo/Starknet failure modes, not generic documentation.

Current workflow:

- `quality.yml` is the required per-PR gate.
- `full-evals.yml` runs on schedule/workflow dispatch and auto-triggers on `pull_request` events (`opened`, `synchronize`, `reopened`, `ready_for_review`) when touched paths match `SKILL.md`, `**/SKILL.md`, `**/references/**`, `evals/**`, `scripts/quality/**`, or `.github/workflows/**`.
- A build-side generation eval tracks contract-authoring quality (prompt -> generated code -> build/test/static checks) as informational telemetry in `full-evals.yml`.
- External triage trends live under `evals/scorecards/`; the evaluation policy is documented in `evals/README.md`.
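The trigger conditions above can be pictured as a GitHub Actions `on:` block. This is a hypothetical sketch assembled from the path and event lists in this section (the cron expression is a placeholder, not the actual schedule); see `.github/workflows/full-evals.yml` for the real configuration:

```yaml
# Hypothetical sketch of full-evals.yml triggers; the real workflow may differ.
on:
  schedule:
    - cron: "0 6 * * 1"   # placeholder cron, not the actual schedule
  workflow_dispatch:
  pull_request:
    types: [opened, synchronize, reopened, ready_for_review]
    paths:
      - "SKILL.md"
      - "**/SKILL.md"
      - "**/references/**"
      - "evals/**"
      - "scripts/quality/**"
      - ".github/workflows/**"
```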

## Website

## Contributing

See `CONTRIBUTING.md`, `SECURITY.md`, and `THIRD_PARTY.md`.

Core local gates:

```sh
python3 scripts/quality/validate_skills.py
python3 scripts/quality/validate_marketplace.py
python3 scripts/quality/parity_check.py
```

## License

MIT
