Running LLMs on Your Own Machine

A nine-minute tour of running large language models locally — no cloud, no API keys, no monthly bill.

📽 Watch the presentation

caseproof.github.io/running-llms-locally

What you'll learn

  • Why local — privacy, cost, offline access, and control
  • The three tools to know — GPT4All, LM Studio, and Ollama, and which one fits you
  • Matching models to your hardware — how much RAM you need for 3B, 7B, 13B, and 70B models
  • LocalDocs & RAG — turn any folder into searchable, cited context
  • The Ollama API — point any OpenAI-compatible app at your laptop in two lines
  • Modelfiles — bake custom personas into a model and share them like a gist
  • Obsidian integration — make your notes the AI's long-term memory
  • Mobile — the best on-device LLM apps for iOS and Android

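A Modelfile is a short text file, Dockerfile-like in shape, that layers a persona onto a base model. A minimal sketch (the persona and model name here are invented for illustration):

```
FROM llama3
PARAMETER temperature 0.2
SYSTEM """
You are a patient code reviewer. Answer tersely and always cite line numbers.
"""
```

Build and run it with `ollama create reviewer -f Modelfile` and `ollama run reviewer`; since it is plain text, the file can be shared like a gist.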
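The hardware-matching advice above follows a common rule of thumb: a quantized model's weights take roughly `parameters × bits ÷ 8` bytes, plus headroom for the KV cache and runtime. A minimal sketch, where the 4-bit default and the 20% overhead factor are assumptions, not figures from the slides:

```python
def approx_model_ram_gb(params_billion, bits=4, overhead=1.2):
    """Rough RAM estimate for a quantized local model.

    Rule of thumb: weights need params * bits/8 bytes, and the 1.2x
    overhead factor (an assumption) covers KV cache and runtime.
    """
    return params_billion * bits / 8 * overhead

# A 4-bit 7B model lands around 4.2 GB; a 4-bit 70B around 42 GB,
# which is why 70B models need a high-RAM workstation.
print(approx_model_ram_gb(7))   # 4.2
print(approx_model_ram_gb(70))  # 42.0
```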
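The "two lines" for the Ollama API are a base URL and a dummy API key: Ollama serves an OpenAI-compatible endpoint on port 11434, so any client that speaks the Chat Completions format can target it. A standard-library sketch, where "llama3" is a placeholder for whatever model you have pulled with `ollama pull`:

```python
import json
from urllib import request

# Ollama's OpenAI-compatible endpoint (default port 11434).
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_payload(prompt, model="llama3"):
    # Standard Chat Completions request body.
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

def chat(prompt, model="llama3"):
    # No real API key is needed for a local server.
    req = request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

In practice you would more often just point an existing OpenAI client at `http://localhost:11434/v1` and set any non-empty key.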
About

Slide deck: Running LLMs on Your Own Machine