Show HN: AMP – open-source memory server for AI agents (MCP, SQLite, D3.js) (github.com/akshayaggarwal99)
5 points by akshayaggarwal 1 day ago | hide | past | favorite | 1 comment
Hi HN,

I’m Akshay. I built AMP because I was tired of my AI agents having "amnesia" the moment I closed the terminal.

Like many of you, I use Claude/Cursor daily. RAG is great for searching documentation, but it’s terrible for continuity. It chunks text blindly, losing the narrative. When I asked my agent "Why did we decide to use FastAPI last week?", it would hallucinate or just give me generic pros/cons because the specific context of our decision was lost in a vector soup.

So I decided to build a proper *Hippocampus* for my agents.

*What is it?* AMP is a local-first memory server that sits between your agent and its LLM. It implements a "3-Layer Brain":

1. *STM*: A scratchpad for what we are doing right now.

2. *LTM*: Consolidated facts and insights (promoted from STM).

3. *Galaxy Graph*: A force-directed knowledge graph (d3.js) that physically links related concepts.
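To give a feel for the STM→LTM promotion, here's a minimal sketch in Python/SQLite. The table names, fields, and the hit-count promotion rule are my own illustration, not AMP's actual schema:

```python
import sqlite3
import time

# Hypothetical two-table layout; AMP's real schema may differ.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE stm (id INTEGER PRIMARY KEY, content TEXT,
                  hits INTEGER DEFAULT 0, created REAL);
CREATE TABLE ltm (id INTEGER PRIMARY KEY, content TEXT, promoted REAL);
""")

def remember(content):
    # New observations land in short-term memory first.
    db.execute("INSERT INTO stm (content, created) VALUES (?, ?)",
               (content, time.time()))

def consolidate(min_hits=2):
    # Promote STM entries that were recalled often enough into LTM.
    rows = db.execute("SELECT id, content FROM stm WHERE hits >= ?",
                      (min_hits,)).fetchall()
    for rid, content in rows:
        db.execute("INSERT INTO ltm (content, promoted) VALUES (?, ?)",
                   (content, time.time()))
        db.execute("DELETE FROM stm WHERE id = ?", (rid,))
    return len(rows)

remember("We chose FastAPI for async support")
db.execute("UPDATE stm SET hits = 3")  # simulate repeated recall
consolidate()
```

The point is just the lifecycle: everything starts as scratchpad state, and only memories that prove useful get consolidated into the long-term store.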

*The Tech Stack*

* *Protocol*: Built natively for the *Model Context Protocol (MCP)*. If you use Claude Desktop or Cursor, it works out of the box.

* *Backend*: Python, FastAPI, SQLite (no Docker needed).

* *Search*: Hybrid (keyword + vector) using FastEmbed.

* *Viz*: A 60fps local dashboard to actually see your agent's brain growing.
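The hybrid search is conceptually a weighted fusion of vector similarity and keyword overlap. Here's a toy sketch of that fusion — the scoring functions and the `alpha` weight are placeholders for illustration (AMP uses FastEmbed vectors, not this hand-rolled overlap score):

```python
import math

def keyword_score(query, doc):
    # Fraction of query terms that appear in the document.
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(query, doc, q_vec, d_vec, alpha=0.5):
    # Weighted fusion: vectors catch paraphrases,
    # keywords catch exact identifiers the embedding misses.
    return alpha * cosine(q_vec, d_vec) + (1 - alpha) * keyword_score(query, doc)
```

The reason for fusing rather than going vector-only: exact tokens like "FastAPI" should always match, even when the embedding neighborhood is noisy.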

*Benchmarks*

I didn't want just another vector wrapper, so I focused heavily on retention. I benchmarked it against Mem0 (a popular alternative) on the LoCoMo dataset:

* *AMP*: 81.6% Recall (Context-First)

* *Mem0*: 21.7% Recall (Extraction-First)

It turns out that preserving the narrative (who/what/when) before summarizing is key to avoiding "I don't know" answers.

I’d love to hear your thoughts on the architecture. Does your agent need a hippocampus?

Repo: https://github.com/akshayaggarwal99/amp





I was ready to hate this, guessing it was a low-effort collection of text files.

But the approach looks very thoughtful.

I turned memory on for Claude, and quickly discovered that I didn’t want fragments of old conversations haunting new ones.

OP: how has day-to-day use been for you? Do you find the memories are sometimes unwanted or off-topic?



