CHANJOONGX
Highlights
- Pro
Pinned
- microgpt-efficiency (Public): "Everything else is just for efficiency." Karpathy's microgpt benchmarked across scalar autograd, NumPy, and PyTorch (RTX 5080)
- stetkeep (Public, JavaScript): XML protocol framework + 16-entry false-positive catalog for Claude Code. Stops AI from refactoring intentional code via PreToolUse hooks, tool-scoped subagents, and path-scoped rules.
- mobilevit-quantization (Public, Python, 2 stars): From-scratch PyTorch implementation of "MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer" (ICLR 2022) with FP16, INT8, and INT4 post-training quantization
- microgpt: "The most atomic way to train and run inference for a GPT in pure, dependency-free Python. This file is the complete algorithm. Everything else is just efficiency."
- myofferagent (Public, TypeScript, 1 star): AI career agent for resume building, ATS analysis, job matching, and cover letter writing in one conversation | 2026 Global PBL 1st Hackathon, Irvine CA