Before this integration, Cursor (an AI-assisted IDE many of us already use daily) lacked a robust way to persist user context. To solve this, I used Graphiti’s Model Context Protocol (MCP) server, which allows structured data exchange between the IDE and Graphiti's temporal knowledge graph.
Key points of how this works:
- Custom entities like 'Requirement', 'Preference', and 'Procedure' precisely capture coding standards and project specs.
- Real-time updates let Cursor adapt instantly—if you change frameworks or update standards, the memory updates immediately.
- Persistent retrieval ensures Cursor always recalls your latest preferences and project decisions, across new agent sessions, projects, and even after restarting the IDE.
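To make the custom-entity idea concrete, here is a minimal sketch of what 'Requirement', 'Preference', and 'Procedure' types might look like. Graphiti's actual MCP server defines its entity types as Pydantic models; the field names below are illustrative guesses, not the real schema, and plain dataclasses are used to keep the sketch dependency-free.

```python
from dataclasses import dataclass

# Hypothetical shapes for the three custom entity types mentioned above.
# The real Graphiti entity definitions may differ.

@dataclass
class Requirement:
    project_name: str   # which project the requirement belongs to
    description: str    # what the project must do

@dataclass
class Preference:
    category: str       # e.g. "frameworks", "code_style"
    description: str    # the user's stated preference

@dataclass
class Procedure:
    name: str           # short label, e.g. "release checklist"
    steps: str          # how the task should be carried out

# The IDE agent would extract instances like this from conversation
# and persist them to the knowledge graph:
pref = Preference(category="frameworks", description="Prefer FastAPI over Flask")
```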
I’d love your feedback—particularly on the approach and how it fits your workflow.
Here's a detailed write-up: https://www.getzep.com/blog/cursor-adding-memory-with-graphi...
GitHub Repo: https://github.com/getzep/graphiti
-Daniel
https://docs.cursor.com/context/rules-for-ai
I think the difference is that Cursor doesn't update its rules automatically as you work, while this might?
You're also wrong that it can't do this automatically: if you add a system instruction telling it to record all important decisions in .cursorrules, it will record them there automatically.
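For example, a line like the following in .cursorrules (my wording, not from any official docs) is enough to get the agent to persist decisions on its own:

```
After any significant architectural or tooling decision, append a one-line
summary of that decision to .cursorrules under a "## Decisions" heading.
```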
In the first conversation round, I asked Claude to grasp the overall project and initialize its memory. Unfortunately, Claude hallucinated and generated an episode containing a full name entirely unrelated to my project's actual full name (my project name is an abbreviation).
In the second conversation round, I provided Claude with the full name of my project and asked it to correct its memory. In response, Claude apologized and claimed it now understood the full name of my project, but it did not invoke any MCP command.
In the third conversation round, I specifically asked it to use the MCP command to update its memory. Claude successfully added a new episode but failed to remove the incorrect old episode.
It wasn't until the fourth conversation round that I directly pointed out that it should eliminate the incorrect old episode, and Claude finally completed the memory initialization that should have been accomplished at the end of the first round.
I have set up the correct Cursor Rules according to the README.
At this point, it appears this project is challenging to use with natural language. I need to explicitly instruct Claude on which specific tools to call for various operations to achieve the intended outcome.
Am I doing something wrong?
The requirement for an OpenAI key may also be a little off-putting, or at least could do with some indication of realistic costs; most Cursor users will need significant motivation to add to the subscription they already have.
Don't get me wrong, this could be a really worthwhile addition to the LLM coding toolset, but I think the presentation needs some work on how to get up and running quickly.
There's also a Docker Compose setup: https://github.com/getzep/graphiti/tree/main/mcp_server#runn...
The Cursor MCP setup is also simple:
```
{
  "mcpServers": {
    "Graphiti": {
      "url": "http://localhost:8000/sse"
    }
  }
}
```
Using file pattern matching and automatic attachment makes it much, much more sticky in my experience.
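For example, a project rule file in .cursor/rules/ can use glob-based auto-attachment so it's injected whenever matching files are in context. The paths, globs, and wording below are illustrative, not taken from the Graphiti README:

```
---
description: Graphiti memory conventions for backend code
globs: ["src/**/*.py"]
alwaysApply: false
---
Before writing code, search Graphiti memory for relevant Preferences
and Procedures, and record new decisions as episodes.
```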
Glancing through the article, I can't tell, is this Cursor specific? Some of us are raw dogging VS Code with https://cline.bot, which supports MCP servers: https://cline.bot/mcp-marketplace.
Would love to try this out in Cline!
For example, I'd like to be in control of the architectural patterns and not let the LLM drive this.
For example: "architecture patterns" such as the dependency rule from clean architecture, and "development areas" such as frontend and backend.