# Graphiti MCP Server
Graphiti is a framework for building and querying temporally-aware knowledge graphs, specifically tailored for AI agents
operating in dynamic environments. Unlike traditional retrieval-augmented generation (RAG) methods, Graphiti
continuously integrates user interactions, structured and unstructured enterprise data, and external information into a
coherent, queryable graph. The framework supports incremental data updates, efficient retrieval, and precise historical
queries without requiring complete graph recomputation, making it suitable for developing interactive, context-aware AI
applications.
This is an experimental Model Context Protocol (MCP) server implementation for Graphiti. The MCP server exposes
Graphiti's key functionality through the MCP protocol, allowing AI assistants to interact with Graphiti's knowledge
graph capabilities.
## Features
The Graphiti MCP server provides comprehensive knowledge graph capabilities:
- **Episode Management**: Add, retrieve, and delete episodes (text, messages, or JSON data)
- **Entity Management**: Search and manage entity nodes and relationships in the knowledge graph
- **Search Capabilities**: Search for facts (edges) and node summaries using semantic and hybrid search
- **Group Management**: Organize and manage groups of related data with group_id filtering
- **Graph Maintenance**: Clear the graph and rebuild indices
- **Graph Database Support**: Multiple backend options including FalkorDB (default) and Neo4j
- **Multiple LLM Providers**: Support for OpenAI, Anthropic, Gemini, Groq, and Azure OpenAI
- **Multiple Embedding Providers**: Support for OpenAI, Voyage, Sentence Transformers, and Gemini embeddings
- **Rich Entity Types**: Built-in entity types including Preferences, Requirements, Procedures, Locations, Events, Organizations, Documents, and more for structured knowledge extraction
- **HTTP Transport**: Default HTTP transport with MCP endpoint at `/mcp/` for broad client compatibility
- **Queue-based Processing**: Asynchronous episode processing with configurable concurrency limits
## Quick Start
### Clone the Graphiti GitHub repo
```bash
git clone https://github.com/getzep/graphiti.git
```
or
```bash
gh repo clone getzep/graphiti
```
### For Claude Desktop and other `stdio` only clients
1. Note the full path to this directory.
```bash
cd graphiti && pwd
```
2. Install the [Graphiti prerequisites](#prerequisites).
3. Configure Claude, Cursor, or other MCP client to use [Graphiti with a `stdio` transport](#integrating-with-mcp-clients). See the client documentation on where to find their MCP configuration files.
### For Cursor and other HTTP-enabled clients
1. Change directory to the `mcp_server` directory
`cd graphiti/mcp_server`
2. Start the combined FalkorDB + MCP server using Docker Compose (recommended)
```bash
docker compose up
```
This starts both FalkorDB and the MCP server in a single container.
**Alternative**: Run with separate containers using Neo4j:
```bash
docker compose -f docker/docker-compose-neo4j.yml up
```
3. Point your MCP client to `http://localhost:8000/mcp/`
## Installation
### Prerequisites
1. Docker and Docker Compose (for the default FalkorDB setup)
2. OpenAI API key for LLM operations (or API keys for other supported LLM providers)
3. (Optional) Python 3.10+ if running the MCP server standalone with an external FalkorDB instance
### Setup
1. Clone the repository and navigate to the mcp_server directory
2. Use `uv` to create a virtual environment and install dependencies:
```bash
# Install uv if you don't have it already
curl -LsSf https://astral.sh/uv/install.sh | sh
# Create a virtual environment and install dependencies in one step
uv sync
# Optional: Install additional LLM providers (anthropic, gemini, groq, voyage, sentence-transformers)
uv sync --extra providers
```
## Configuration
The server can be configured using a `config.yaml` file, environment variables, or command-line arguments. Command-line arguments take precedence over environment variables, which take precedence over values in `config.yaml`.
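For example, a model name passed on the command line overrides whatever `config.yaml` specifies (the model value below is just a placeholder):
```bash
# config.yaml may set one model; this run uses another
uv run graphiti_mcp_server.py --config config/config.yaml --model gpt-4.1-mini
```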
### Default Configuration
The MCP server comes with sensible defaults:
- **Transport**: HTTP (accessible at `http://localhost:8000/mcp/`)
- **Database**: FalkorDB (combined in single container with MCP server)
- **LLM**: OpenAI with model gpt-5-mini
- **Embedder**: OpenAI text-embedding-3-small
### Database Configuration
#### FalkorDB (Default)
FalkorDB is a Redis-based graph database that comes bundled with the MCP server in a single Docker container. This is the default and recommended setup.
```yaml
database:
  provider: "falkordb"  # Default
  providers:
    falkordb:
      uri: "redis://localhost:6379"
      password: ""  # Optional
      database: "default_db"  # Optional
```
#### Neo4j
For production use or when you need a full-featured graph database, Neo4j is recommended:
```yaml
database:
  provider: "neo4j"
  providers:
    neo4j:
      uri: "bolt://localhost:7687"
      username: "neo4j"
      password: "your_password"
      database: "neo4j"  # Optional, defaults to "neo4j"
```
### Configuration File (config.yaml)
The server supports multiple LLM providers (OpenAI, Anthropic, Gemini, Groq) and embedders. Edit `config.yaml` to configure:
```yaml
server:
  transport: "http"  # Default. Options: stdio, http

llm:
  provider: "openai"  # or "anthropic", "gemini", "groq", "azure_openai"
  model: "gpt-5-mini"  # Default model

database:
  provider: "falkordb"  # Default. Options: "falkordb", "neo4j"
```
### Using Ollama for Local LLM
To use Ollama with the MCP server, configure it as an OpenAI-compatible endpoint:
```yaml
llm:
  provider: "openai"
  model: "gpt-oss:120b"  # or your preferred Ollama model
  api_base: "http://localhost:11434/v1"
  api_key: "ollama"  # dummy key required

embedder:
  provider: "sentence_transformers"  # recommended for local setup
  model: "all-MiniLM-L6-v2"
```
Make sure Ollama is running locally with: `ollama serve`
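If the model hasn't been downloaded yet, a typical local setup looks like this (using the model name from the config above):
```bash
ollama pull gpt-oss:120b   # download the model referenced in config.yaml
ollama serve               # serve the OpenAI-compatible API on port 11434
```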
### Entity Types
Graphiti MCP Server includes built-in entity types for structured knowledge extraction. These entity types are always enabled and configured via the `entity_types` section in your `config.yaml`:
**Available Entity Types:**
- **Preference**: User preferences, choices, opinions, or selections (prioritized for user-specific information)
- **Requirement**: Specific needs, features, or functionality that must be fulfilled
- **Procedure**: Standard operating procedures and sequential instructions
- **Location**: Physical or virtual places where activities occur
- **Event**: Time-bound activities, occurrences, or experiences
- **Organization**: Companies, institutions, groups, or formal entities
- **Document**: Information content in various forms (books, articles, reports, videos, etc.)
- **Topic**: Subject of conversation, interest, or knowledge domain (used as a fallback)
- **Object**: Physical items, tools, devices, or possessions (used as a fallback)
These entity types are defined in `config.yaml` and can be customized by modifying the descriptions:
```yaml
graphiti:
  entity_types:
    - name: "Preference"
      description: "User preferences, choices, opinions, or selections"
    - name: "Requirement"
      description: "Specific needs, features, or functionality"
    # ... additional entity types
```
The MCP server automatically uses these entity types during episode ingestion to extract and structure information from conversations and documents.
### Environment Variables
The `config.yaml` file supports environment variable expansion using `${VAR_NAME}` or `${VAR_NAME:default}` syntax. Key variables:
- `NEO4J_URI`: URI for the Neo4j database (default: `bolt://localhost:7687`)
- `NEO4J_USER`: Neo4j username (default: `neo4j`)
- `NEO4J_PASSWORD`: Neo4j password (default: `demodemo`)
- `OPENAI_API_KEY`: OpenAI API key (required for OpenAI LLM/embedder)
- `ANTHROPIC_API_KEY`: Anthropic API key (for Claude models)
- `GOOGLE_API_KEY`: Google API key (for Gemini models)
- `GROQ_API_KEY`: Groq API key (for Groq models)
- `AZURE_OPENAI_API_KEY`: Azure OpenAI API key
- `AZURE_OPENAI_ENDPOINT`: Azure OpenAI endpoint URL
- `AZURE_OPENAI_DEPLOYMENT`: Azure OpenAI deployment name
- `AZURE_OPENAI_EMBEDDINGS_ENDPOINT`: Optional Azure OpenAI embeddings endpoint URL
- `AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT`: Optional Azure OpenAI embeddings deployment name
- `AZURE_OPENAI_API_VERSION`: Optional Azure OpenAI API version
- `USE_AZURE_AD`: Optional use Azure Managed Identities for authentication
- `SEMAPHORE_LIMIT`: Episode processing concurrency. See [Concurrency and LLM Provider 429 Rate Limit Errors](#concurrency-and-llm-provider-429-rate-limit-errors)
You can set these variables in a `.env` file in the project directory.
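As a sketch of the expansion syntax described above (the exact provider keys depend on your `config.yaml` layout), a snippet like this reads `OPENAI_API_KEY` from the environment and falls back to a default model name:
```yaml
llm:
  provider: "openai"
  model: ${MODEL_NAME:gpt-5-mini}  # uses MODEL_NAME if set, otherwise the default
  providers:
    openai:
      api_key: ${OPENAI_API_KEY}   # no default; must be set in the environment
```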
## Running the Server
### Default Setup (FalkorDB Combined Container)
To run the Graphiti MCP server with the default FalkorDB setup:
```bash
docker compose up
```
This starts a single container with:
- HTTP transport on `http://localhost:8000/mcp/`
- FalkorDB graph database on `localhost:6379`
- FalkorDB web UI on `http://localhost:3000`
- OpenAI LLM with gpt-5-mini model
### Running with Neo4j
#### Option 1: Using Docker Compose
The easiest way to run with Neo4j is using the provided Docker Compose configuration:
```bash
# This starts both Neo4j and the MCP server
docker compose -f docker/docker-compose-neo4j.yml up
```
#### Option 2: Direct Execution with Existing Neo4j
If you have Neo4j already running:
```bash
# Set environment variables
export NEO4J_URI="bolt://localhost:7687"
export NEO4J_USER="neo4j"
export NEO4J_PASSWORD="your_password"
# Run with Neo4j
uv run graphiti_mcp_server.py --database-provider neo4j
```
Or use the Neo4j configuration file:
```bash
uv run graphiti_mcp_server.py --config config/config-docker-neo4j.yaml
```
### Running with FalkorDB
#### Option 1: Using Docker Compose
```bash
# This starts both FalkorDB (Redis-based) and the MCP server
docker compose -f docker/docker-compose-falkordb.yml up
```
#### Option 2: Direct Execution with Existing FalkorDB
```bash
# Set environment variables
export FALKORDB_URI="redis://localhost:6379"
export FALKORDB_PASSWORD="" # If password protected
# Run with FalkorDB
uv run graphiti_mcp_server.py --database-provider falkordb
```
Or use the FalkorDB configuration file:
```bash
uv run graphiti_mcp_server.py --config config/config-docker-falkordb.yaml
```
### Available Command-Line Arguments
- `--config`: Path to YAML configuration file (default: `config/config.yaml`)
- `--llm-provider`: LLM provider to use (openai, anthropic, gemini, groq, azure_openai)
- `--embedder-provider`: Embedder provider to use (openai, azure_openai, gemini, voyage)
- `--database-provider`: Database provider to use (falkordb, neo4j) - default: falkordb
- `--model`: Model name to use with the LLM client
- `--temperature`: Temperature setting for the LLM (0.0-2.0)
- `--transport`: Choose the transport method (http or stdio, default: http)
- `--group-id`: Set a namespace for the graph (optional). If not provided, defaults to "main"
- `--destroy-graph`: If set, destroys all Graphiti graphs on startup
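A typical invocation combining several of these flags (all values are placeholders):
```bash
uv run graphiti_mcp_server.py \
  --config config/config.yaml \
  --database-provider falkordb \
  --model gpt-5-mini \
  --group-id my-project \
  --transport http
```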
### Concurrency and LLM Provider 429 Rate Limit Errors
Graphiti's ingestion pipelines are designed for high concurrency, controlled by the `SEMAPHORE_LIMIT` environment variable. This setting determines how many episodes can be processed simultaneously. Since each episode involves multiple LLM calls (entity extraction, deduplication, summarization), the actual number of concurrent LLM requests will be several times higher.
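For example, at `SEMAPHORE_LIMIT=10` with roughly four LLM calls per episode (an illustrative figure), peak concurrency can approach 40 simultaneous LLM requests.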
**Default:** `SEMAPHORE_LIMIT=10` (suitable for OpenAI Tier 3, mid-tier Anthropic)
#### Tuning Guidelines by LLM Provider
**OpenAI:**
- Tier 1 (free): 3 RPM → `SEMAPHORE_LIMIT=1-2`
- Tier 2: 60 RPM → `SEMAPHORE_LIMIT=5-8`
- Tier 3: 500 RPM → `SEMAPHORE_LIMIT=10-15`
- Tier 4: 5,000 RPM → `SEMAPHORE_LIMIT=20-50`
**Anthropic:**
- Default tier: 50 RPM → `SEMAPHORE_LIMIT=5-8`
- High tier: 1,000 RPM → `SEMAPHORE_LIMIT=15-30`
**Azure OpenAI:**
- Consult your quota in Azure Portal and adjust accordingly
- Start conservative and increase gradually
**Ollama (local):**
- Hardware dependent → `SEMAPHORE_LIMIT=1-5`
- Monitor CPU/GPU usage and adjust
#### Symptoms
- **Too high**: 429 rate limit errors, increased API costs from parallel processing
- **Too low**: Slow episode throughput, underutilized API quota
#### Monitoring
- Watch logs for `429` rate limit errors
- Monitor episode processing times in server logs
- Check your LLM provider's dashboard for actual request rates
- Track token usage and costs
Set this in your `.env` file:
```bash
SEMAPHORE_LIMIT=10 # Adjust based on your LLM provider tier
```
### Docker Deployment
The Graphiti MCP server can be deployed using Docker with your choice of database backend. The Dockerfile uses `uv` for package management, ensuring consistent dependency installation.
A pre-built Graphiti MCP container is available at: `zepai/knowledge-graph-mcp`
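To use the pre-built image without building locally (tag naming may vary; check Docker Hub for current tags):
```bash
docker pull zepai/knowledge-graph-mcp:latest
```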
#### Environment Configuration
Before running Docker Compose, configure your API keys using a `.env` file (recommended):
1. **Create a .env file in the mcp_server directory**:
```bash
cd graphiti/mcp_server
cp .env.example .env
```
2. **Edit the .env file** to set your API keys:
```bash
# Required - at least one LLM provider API key
OPENAI_API_KEY=your_openai_api_key_here
# Optional - other LLM providers
ANTHROPIC_API_KEY=your_anthropic_key
GOOGLE_API_KEY=your_google_key
GROQ_API_KEY=your_groq_key
# Optional - embedder providers
VOYAGE_API_KEY=your_voyage_key
```
**Important**: The `.env` file must be in the `mcp_server/` directory (the parent of the `docker/` subdirectory).
#### Running with Docker Compose
**All commands must be run from the `mcp_server` directory** to ensure the `.env` file is loaded correctly:
```bash
cd graphiti/mcp_server
```
##### Option 1: FalkorDB Combined Container (Default)
Single container with both FalkorDB and MCP server - simplest option:
```bash
docker compose up
```
##### Option 2: Neo4j Database
Separate containers with Neo4j and MCP server:
```bash
docker compose -f docker/docker-compose-neo4j.yml up
```
Default Neo4j credentials:
- Username: `neo4j`
- Password: `demodemo`
- Bolt URI: `bolt://neo4j:7687`
- Browser UI: `http://localhost:7474`
##### Option 3: FalkorDB with Separate Containers
Alternative setup with separate FalkorDB and MCP server containers:
```bash
docker compose -f docker/docker-compose-falkordb.yml up
```
FalkorDB configuration:
- Redis port: `6379`
- Web UI: `http://localhost:3000`
- Connection: `redis://falkordb:6379`
#### Accessing the MCP Server
Once running, the MCP server is available at:
- **HTTP endpoint**: `http://localhost:8000/mcp/`
- **Health check**: `http://localhost:8000/health`
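You can verify the health endpoint from the host once the server is up:
```bash
curl http://localhost:8000/health
# Expected response: {"status": "healthy", "service": "graphiti-mcp"}
```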
#### Running Docker Compose from a Different Directory
If you run Docker Compose from the `docker/` subdirectory instead of `mcp_server/`, you'll need to modify the `.env` file path in the compose file:
```yaml
# Change this line in the docker-compose file:
env_file:
  - path: ../.env  # When running from mcp_server/

# To this:
env_file:
  - path: .env  # When running from mcp_server/docker/
```
However, **running from the `mcp_server/` directory is recommended** to avoid confusion.
## Integrating with MCP Clients
### VS Code / GitHub Copilot
VS Code with GitHub Copilot Chat extension supports MCP servers. Add to your VS Code settings (`.vscode/mcp.json` or global settings):
```json
{
  "mcpServers": {
    "graphiti": {
      "uri": "http://localhost:8000/mcp/",
      "transport": {
        "type": "http"
      }
    }
  }
}
```
### Other MCP Clients
To use the Graphiti MCP server with other MCP-compatible clients, configure it to connect to the server:
> [!IMPORTANT]
> You will need the Python package manager, `uv` installed. Please refer to the [`uv` install instructions](https://docs.astral.sh/uv/getting-started/installation/).
>
> Ensure that you set the full path to the `uv` binary and your Graphiti project folder.
```json
{
  "mcpServers": {
    "graphiti-memory": {
      "transport": "stdio",
      "command": "/Users/<user>/.local/bin/uv",
      "args": [
        "run",
        "--isolated",
        "--directory",
        "/Users/<user>/dev/zep/graphiti/mcp_server",
        "--project",
        ".",
        "graphiti_mcp_server.py",
        "--transport",
        "stdio"
      ],
      "env": {
        "NEO4J_URI": "bolt://localhost:7687",
        "NEO4J_USER": "neo4j",
        "NEO4J_PASSWORD": "password",
        "OPENAI_API_KEY": "sk-XXXXXXXX",
        "MODEL_NAME": "gpt-4.1-mini"
      }
    }
  }
}
```
For HTTP transport (default), you can use this configuration:
```json
{
  "mcpServers": {
    "graphiti-memory": {
      "transport": "http",
      "url": "http://localhost:8000/mcp/"
    }
  }
}
```
## Available Tools
The Graphiti MCP server exposes the following tools:
- `add_episode`: Add an episode to the knowledge graph (supports text, JSON, and message formats)
- `search_nodes`: Search the knowledge graph for relevant node summaries
- `search_facts`: Search the knowledge graph for relevant facts (edges between entities)
- `delete_entity_edge`: Delete an entity edge from the knowledge graph
- `delete_episode`: Delete an episode from the knowledge graph
- `get_entity_edge`: Get an entity edge by its UUID
- `get_episodes`: Get the most recent episodes for a specific group
- `clear_graph`: Clear all data from the knowledge graph and rebuild indices
- `get_status`: Get the status of the Graphiti MCP server and database connection
## Working with JSON Data
The Graphiti MCP server can process structured JSON data through the `add_episode` tool with `source="json"`. This
allows you to automatically extract entities and relationships from structured data:
```python
add_episode(
    name="Customer Profile",
    episode_body="{\"company\": {\"name\": \"Acme Technologies\"}, \"products\": [{\"id\": \"P001\", \"name\": \"CloudSync\"}, {\"id\": \"P002\", \"name\": \"DataMiner\"}]}",
    source="json",
    source_description="CRM data"
)
```
## Integrating with the Cursor IDE
To integrate the Graphiti MCP Server with the Cursor IDE, follow these steps:
1. Run the Graphiti MCP server using the default HTTP transport:
```bash
uv run graphiti_mcp_server.py --group-id <your_group_id>
```
Hint: specify a `group_id` to namespace graph data. If you do not specify a `group_id`, the server will use "main" as the group_id.
or
```bash
docker compose up
```
2. Configure Cursor to connect to the Graphiti MCP server.
```json
{
  "mcpServers": {
    "graphiti-memory": {
      "url": "http://localhost:8000/mcp/"
    }
  }
}
```
3. Add the Graphiti rules to Cursor's User Rules. See [cursor_rules.md](cursor_rules.md) for details.
4. Kick off an agent session in Cursor.
The integration enables AI assistants in Cursor to maintain persistent memory through Graphiti's knowledge graph
capabilities.
## Integrating with Claude Desktop (Docker MCP Server)
The Graphiti MCP Server uses HTTP transport (at endpoint `/mcp/`). Claude Desktop does not natively support HTTP transport, so you'll need to use a gateway like `mcp-remote`.
1. **Run the Graphiti MCP server**:
```bash
docker compose up
# Or run directly with uv:
uv run graphiti_mcp_server.py
```
2. **(Optional) Install `mcp-remote` globally**:
If you prefer to have `mcp-remote` installed globally, or if you encounter issues with `npx` fetching the package, you can install it globally. Otherwise, `npx` (used in the next step) will handle it for you.
```bash
npm install -g mcp-remote
```
3. **Configure Claude Desktop**:
Open your Claude Desktop configuration file (usually `claude_desktop_config.json`) and add or modify the `mcpServers` section as follows:
```json
{
  "mcpServers": {
    "graphiti-memory": {
      // You can choose a different name if you prefer
      "command": "npx",  // Or the full path to mcp-remote if npx is not in your PATH
      "args": [
        "mcp-remote",
        "http://localhost:8000/mcp/"  // The Graphiti server's HTTP endpoint
      ]
    }
  }
}
```
If you already have an `mcpServers` entry, add `graphiti-memory` (or your chosen name) as a new key within it.
4. **Restart Claude Desktop** for the changes to take effect.
## Requirements
- Python 3.10 or higher
- OpenAI API key (for LLM operations and embeddings) or other LLM provider API keys
- MCP-compatible client
- Docker and Docker Compose (for the default FalkorDB combined container)
- (Optional) Neo4j database (version 5.26 or later) if not using the default FalkorDB setup
## Telemetry
The Graphiti MCP server uses the Graphiti core library, which includes anonymous telemetry collection. When you initialize the Graphiti MCP server, anonymous usage statistics are collected to help improve the framework.
### What's Collected
- Anonymous identifier and system information (OS, Python version)
- Graphiti version and configuration choices (LLM provider, database backend, embedder type)
- **No personal data, API keys, or actual graph content is ever collected**
### How to Disable
To disable telemetry in the MCP server, set the environment variable:
```bash
export GRAPHITI_TELEMETRY_ENABLED=false
```
Or add it to your `.env` file:
```
GRAPHITI_TELEMETRY_ENABLED=false
```
For complete details about what's collected and why, see the [Telemetry section in the main Graphiti README](../README.md#telemetry).
## License
This project is licensed under the same license as the parent Graphiti project.