The average knowledge worker has their thinking scattered across browser tabs, Slack threads, email chains, and notebooks that haven’t been opened since last quarter. Most of it is gone the moment the tab closes. The rest is findable in theory and lost in practice.
A second brain fixes that — a single place where your thinking accumulates, connects, and compounds over time. The idea isn’t new.
What is new is what happens when you give that brain to an AI. Not as a search index. As context. Suddenly the AI you’re working with knows about the decision you made three months ago, the constraint you discovered last week, the small but critical detail you’d long forgotten because it was buried in a note from a Tuesday in February. It doesn’t just retrieve — it reasons. It helps you build projects with context no chat window, no SaaS platform, no fresh conversation can match.
The question isn’t whether to build one. It’s whether to build it in a way that actually works — or hand your thinking to someone else’s platform and hope they’re still around in three years.
A video dropped yesterday. “Claude Code + Karpathy’s Obsidian = New Meta.” 189,000 subscribers. Already circulating in the feeds of everyone who thinks about AI and productivity.
I’ve been running this setup for months.
Not because I saw a video. Because I tried everything else first and this is what survived.
I Did It the “Proper” Way First
When I wanted to build a second brain with AI, I did what any technically minded person does: I reached for the right tools. Vector embeddings. Pinecone. Ingestion pipelines. I built an HR chatbot with N8N and Pinecone as the backend. I tried wiring Notion up with a Pinecone-backed retrieval layer.
These are legitimate approaches. I’ve shipped them in production. I know what they take.
And for a personal knowledge system, they were completely wrong.
Here’s what nobody tells you about RAG: the pipeline is the product. Before you can search your knowledge, you have to build and maintain the system that turns your knowledge into searchable vectors. Every new note is a workflow step. Every source needs chunking, embedding, syncing. When your source material changes, your embeddings drift. The thing that was supposed to help you think now needs its own maintenance schedule.
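To make the tax concrete, here is roughly what a minimal ingestion step looks like. This is an illustrative sketch, not any specific product's API: the `embed` and `upsert` functions are stubs standing in for whatever embedding model and vector store a real pipeline would call.

```python
# Illustrative sketch of the RAG ingestion tax: every note must be
# chunked, embedded, and synced before it is searchable. embed() and
# upsert() are stubs standing in for a real model and vector store.
from pathlib import Path

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split a note into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text), 1), step)]

def embed(chunks: list[str]) -> list[list[float]]:
    """Stub: a real pipeline calls an embedding model here."""
    return [[0.0] for _ in chunks]

def upsert(note_id: str, chunks: list[str], vectors: list[list[float]]) -> None:
    """Stub: a real pipeline writes vectors to Pinecone or similar."""
    pass

def ingest_vault(vault: Path) -> int:
    """Re-run this whole loop every time any source note changes."""
    count = 0
    for note in vault.rglob("*.md"):
        chunks = chunk(note.read_text(encoding="utf-8"))
        upsert(str(note), chunks, embed(chunks))
        count += 1
    return count
```

Every box in that sketch is a thing that can drift, break, or fall behind your actual notes.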
I didn’t want to maintain a pipeline. I wanted to think.
What I Actually Run
The setup is embarrassingly simple.
Obsidian for the vault. Every note is a markdown file. Every file lives on my machine, backed by a private Git repository.
Claude Code as the AI layer. It talks directly to the filesystem — reads files, writes files, updates notes, maintains structure. No API middleware. No ingestion step. No embeddings.
A CLAUDE.md file that tells Claude the rules of the system: where things live, what conventions to follow, how to behave in this vault specifically.
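The file itself can be short. A hedged sketch, where the section names and folder layout are illustrative conventions from my vault, not a required schema:

```markdown
# CLAUDE.md

## Layout (illustrative paths)
- `inbox/`: unprocessed captures
- `notes/`: permanent notes, one topic per file
- `sessions/`: session summaries written by /session-end

## Conventions
- Every note gets YAML frontmatter with `created` and `tags`.
- Link related notes with [[wikilinks]]; never duplicate content.
- Never delete a note; move superseded notes to `archive/`.
```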
Session skills — a /session-start that warm-starts every conversation from vault context, and a /session-end that writes a structured note capturing what we did, what decisions were made, and what to pick up next time.
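In Claude Code, a custom slash command is just a markdown prompt file in `.claude/commands/`. Something like this, saved as `.claude/commands/session-end.md`, is enough; the note template is my guess at a sensible shape, not a fixed format:

```markdown
Write a session note to `sessions/` named with today's date and a
short topic slug, containing:

## What we did
## Decisions made (and why)
## Open questions
## Pick up next time

Link every project or note touched this session with [[wikilinks]],
then update the relevant project notes to reflect any decisions.
```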
That’s the minimum viable version. If you have Obsidian and any LLM that can interact with the filesystem — Claude Code, Cursor, Windsurf, take your pick — you can build this today.
Why This Beats RAG for Personal Knowledge
Three reasons. All learned the hard way.
1. No ingestion tax.
With RAG, every piece of knowledge has to pass through a pipeline before it’s usable. With this setup, I write a note and it exists. Claude reads it when it’s relevant. That’s the entire workflow. Half the time, I don’t even run /session-start manually. Claude just does it. The friction is so low it effectively disappears.
2. Markdown is portable. Databases aren’t.
Notion is prettier. I genuinely don’t care. Function over style, every time. My notes are markdown files. They open in any editor, on any machine, without an account or an API key. If I switch from Claude Code to something else tomorrow, my vault doesn’t care. The knowledge stays mine. I’ve watched people lose years of Notion content to export limitations. I’ve seen Roam users scrambling when pricing changed. Your knowledge shouldn’t be held hostage to a product decision you had no part in.
3. Data sovereignty.
This is the one I feel most strongly about. The video recommends Pinecone — a SaaS vector database. NotebookLM — Google’s product. The entire “new meta” stack has your most personal knowledge distributed across third-party platforms, each with their own terms of service, their own pricing models, their own sunset risk.
My knowledge lives on my machine and in my own Git repository. Change IDE — still works. Change LLM provider — still works. Anthropic disappears tomorrow — still works.
The Privacy Question You’re Probably Asking
You might be thinking: aren’t you just sending your notes to Anthropic instead of Pinecone? Fair challenge. The difference is storage versus processing — your notes pass through to generate a response and that’s it. I’m on a consumer plan with model training opted out, which takes about ten seconds in account settings. My notes don’t live on Anthropic’s servers. With Pinecone, your data does — permanently, on their infrastructure, under their terms. That’s the meaningful difference.
If you want zero data leaving your machine at all, swap Claude Code for a local model. Ollama works. The vault doesn’t care which LLM is reading it. That’s exactly the point — the system doesn’t depend on any single vendor being trustworthy. You can swap the LLM layer without touching your knowledge. Try doing that with your Pinecone index.
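As a sketch of how small that swap is: gather vault context, hand it to a local model over Ollama's HTTP API. The endpoint and payload match Ollama's documented `/api/generate` route at its default port; the model name and the context-size limit are assumptions you would tune.

```python
# Sketch: point a local model at the vault instead of Claude Code.
# Assumes Ollama is running at its default localhost:11434; the model
# name is whatever you have pulled locally.
import json
import urllib.request
from pathlib import Path

def load_vault_context(vault: Path, limit: int = 20_000) -> str:
    """Concatenate markdown notes into one prompt context, newest first."""
    notes = sorted(vault.rglob("*.md"), key=lambda p: p.stat().st_mtime, reverse=True)
    parts: list[str] = []
    total = 0
    for note in notes:
        text = f"## {note.name}\n{note.read_text(encoding='utf-8')}\n"
        if total + len(text) > limit:
            break
        parts.append(text)
        total += len(text)
    return "".join(parts)

def ask(vault: Path, question: str, model: str = "llama3.2") -> str:
    """Send vault context plus a question to a local Ollama model."""
    payload = json.dumps({
        "model": model,
        "prompt": f"{load_vault_context(vault)}\n\nQuestion: {question}",
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

The `ask` function is the only vendor-specific part. Replace its body and the rest of the system never notices.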
What It Looks Like at Scale
The minimum viable setup — Obsidian plus a file-aware LLM — is genuinely useful from day one.
But I’ve been running something more elaborate. There’s a second agent in this system: Jarvis, running on a Raspberry Pi 5. Jarvis generates my daily briefing each morning, maintains the vault overnight, handles the housekeeping I don’t want to think about. My own entry points now include voice notes from Ray-Ban Meta smart glasses, Telegram messages, and a custom Jarvis UI with TTS. All of it ends up in Obsidian. That’s a different article. The point is: the foundation is just markdown files and a terminal. Everything else is built on top of that.
What I Haven’t Solved Yet
One honest gap: the hyperlink problem.
Obsidian’s power is in the connections between notes — the [[wikilinks]] that build a graph of your thinking. Right now, those links are created manually or as a side effect of Claude working in the vault. There’s no agent that looks at new notes overnight and says: this connects to that, and that connects to this. It’s a solvable problem. I just haven’t built it yet. I mention it because the “new meta” framing tends to imply a finished system. This one isn’t finished. It’s a living thing, and that’s partly why it works.
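For what it's worth, the core of such an agent is not much code. A naive sketch of what the overnight pass could do: flag every place one note's title appears, unlinked, in another note's body. A real version would want aliases, stemming, and a relevance threshold; this is the shape of the idea, not the finished agent.

```python
# Naive sketch of an overnight linker: find places where one note's
# title appears in another note's body without already being a wikilink.
import re
from pathlib import Path

def suggest_links(vault: Path) -> dict[str, list[str]]:
    """Map each note to the note titles it mentions but does not link."""
    notes = {p: p.read_text(encoding="utf-8") for p in vault.rglob("*.md")}
    titles = {p.stem for p in notes}
    suggestions: dict[str, list[str]] = {}
    for path, body in notes.items():
        already_linked = set(re.findall(r"\[\[([^\]|]+)", body))
        hits = [
            title for title in sorted(titles)
            if title != path.stem
            and title not in already_linked
            and re.search(rf"\b{re.escape(title)}\b", body, re.IGNORECASE)
        ]
        if hits:
            suggestions[path.name] = hits
    return suggestions
```

Run nightly, the output becomes a review queue: link suggestions a human (or the AI, with your conventions in CLAUDE.md) approves in the morning.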
The Actual New Meta
The video is good. The instinct is right. Reasoning over your knowledge, not just retrieval of it — yes. Structured notes rather than disconnected chunks — yes.
But the “meta” isn’t Claude Code plus Obsidian. The meta is owning your knowledge stack.
Simple enough to maintain. Portable enough to survive tool changes. Private enough that you control what it knows. You don’t need a vector database. You don’t need an embedding pipeline. You need a folder of markdown files and something that can read them.
Start there.
Next: adding an overnight agent to the system — what Jarvis actually does and why it changes everything.