# Personal Research Assistant MCP

Semantic search and conversational querying across a personal research library.
A production-ready MCP (Model Context Protocol) server that enables semantic search across your personal research library. Built for AI Engineers who need fast, accurate document retrieval integrated with Claude Desktop and other AI tools.
## Problem Statement
Researchers and professionals accumulate dozens of papers and documents but struggle to:
- Find relevant information across multiple documents
- Remember which paper contained specific insights
- Connect related concepts across different sources
- Avoid spending 2+ hours a day searching for information
Traditional keyword search misses semantic connections, and reading everything is impractical.
## Solution
An MCP server that:
- Indexes documents into a vector database using semantic embeddings
- Enables Claude (or any MCP client) to query your research library conversationally
- Provides sub-500ms response times with 85%+ retrieval accuracy
- Includes a Streamlit dashboard for management and metrics
## Architecture

```
Documents (PDF/DOCX/HTML/MD)
        ↓
Document Processor → Text Chunker → Embeddings
        ↓
ChromaDB Vector Store
        ↓
├── MCP Server (FastMCP) → Claude Desktop
└── Streamlit UI → Monitoring/Testing
```
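The Text Chunker stage splits extracted text into overlapping windows before embedding, so context near chunk boundaries is not lost. A minimal sliding-window sketch (the actual `rag_pipeline/chunker.py` may use token-based sizes and different defaults):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping word windows (sliding-window chunking).

    chunk_size and overlap are measured in words for this sketch; a real
    pipeline would typically count tokens instead.
    """
    words = text.split()
    if not words:
        return []
    step = chunk_size - overlap  # how far the window slides each iteration
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # final window already covers the end of the text
    return chunks
```

Each chunk shares its last `overlap` words with the start of the next chunk, which helps the retriever match queries that span a boundary.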
## Features
- Semantic Search: Natural language queries across your entire library
- Multi-Format Support: PDF, DOCX, HTML, Markdown, TXT
- Fast Retrieval: <500ms query latency on 1000+ chunks
- MCP Integration: Works with Claude Desktop, VS Code, and any MCP client
- Metadata Extraction: Automatically extracts titles, authors, keywords
- Query Logging: Track usage and performance metrics
- Streamlit Dashboard: Upload, search, and visualize metrics
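Under the hood, semantic search ranks stored chunk embeddings by cosine similarity to the query embedding; ChromaDB does this at scale, but the core idea fits in a few lines. A sketch using toy 3-d vectors in place of real sentence-transformer embeddings:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec: list[float], chunk_vecs: list[list[float]], k: int = 2):
    """Return (chunk_index, score) pairs for the k most similar chunks."""
    scored = sorted(
        enumerate(chunk_vecs),
        key=lambda iv: cosine(query_vec, iv[1]),
        reverse=True,
    )
    return [(i, round(cosine(query_vec, v), 3)) for i, v in scored[:k]]
```

In the real pipeline the query and chunks are embedded with the same model (local sentence-transformers or OpenAI embeddings), and ChromaDB performs this nearest-neighbor ranking internally.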
## Performance Metrics
| Metric | Target | Actual |
|---|---|---|
| Retrieval Accuracy | 85% | See METRICS.md |
| Query Latency | <500ms | See METRICS.md |
| Scale | 10k+ chunks | 1782+ chunks |
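Latency figures like these can be reproduced with a small timing harness along the lines of `tests/benchmark_performance.py` (a sketch; the real script's interface and reported percentiles may differ):

```python
import statistics
import time

def time_query(search_fn, query: str, runs: int = 20) -> dict:
    """Call search_fn(query) repeatedly and report latency percentiles in ms."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        search_fn(query)
        samples.append((time.perf_counter() - t0) * 1000)
    ordered = sorted(samples)
    return {
        "p50": statistics.median(ordered),
        "p95": ordered[int(0.95 * (runs - 1))],
    }

# A trivial stand-in search function; point this at the real retriever.
stats = time_query(lambda q: q.lower().split(), "What are the challenges in RAG systems?")
```

Comparing `p95` against the 500 ms target catches tail latency that an average would hide.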
## Installation

### Prerequisites
- Python 3.11+
- 2GB RAM minimum
- Git
### Setup

```bash
# Clone repository
git clone https://github.com/yourusername/research-assistant-mcp.git
cd research-assistant-mcp

# Create virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Install local embeddings
pip install sentence-transformers

# Configure environment
cp .env.example .env
# Edit .env - add OPENAI_API_KEY if using OpenAI embeddings
```
### Download Sample Data

```bash
# Download 25 AI/ML papers from arXiv
python scripts/download_sample_papers.py --count 25
```
### Index Documents

```bash
# Index sample papers
python scripts/index_docs.py --folder ./sample_papers

# Or index your own documents
python scripts/index_docs.py --folder /path/to/your/papers --recursive
```
## Usage
### Start the MCP Server

```bash
python mcp_server/server.py
```
### Configure Claude Desktop

Add the server to `claude_desktop_config.json`:

- Mac: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`

```json
{
  "mcpServers": {
    "research-assistant": {
      "command": "python",
      "args": ["/full/path/to/research-assistant-mcp/mcp_server/server.py"],
      "env": {}
    }
  }
}
```

Restart Claude Desktop to load the server.
### Launch the Streamlit UI

```bash
streamlit run ui/app.py
```

The dashboard opens at http://localhost:8501.
## MCP Tools
### `search_documents`

Semantic search across your library.

- Query: "What are the challenges in RAG systems?"
- Returns: top-k results with sources, scores, and metadata

### `get_document_summary`

Get a quick overview of a document.

- Input: document path or title
- Returns: title, author, keywords, preview

### `find_related_papers`

Find documents similar to a topic.

- Query: "prompt engineering techniques"
- Returns: related papers with relevance scores
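For orientation, a `search_documents` response might look like the following. The field names here are illustrative, not the server's exact schema:

```json
{
  "query": "What are the challenges in RAG systems?",
  "results": [
    {
      "source": "sample_papers/example_paper.pdf",
      "chunk": "Retrieval quality degrades when chunks lack surrounding context...",
      "score": 0.87,
      "metadata": {"title": "Example Paper", "authors": ["Example Author"]}
    }
  ]
}
```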
## Project Structure

```
research-assistant-mcp/
├── mcp_server/            # MCP server implementation
│   └── server.py
├── rag_pipeline/          # RAG components
│   ├── config.py
│   ├── document_processor.py
│   ├── chunker.py
│   ├── vector_store.py
│   ├── retriever.py
│   └── metadata_extractor.py
├── ui/                    # Streamlit dashboard
│   ├── app.py
│   └── pages/
├── scripts/               # CLI utilities
│   ├── index_docs.py
│   └── download_sample_papers.py
├── tests/                 # Testing & benchmarks
│   ├── sample_queries.json
│   └── benchmark_performance.py
├── data/                  # Data storage
│   ├── chroma_db/
│   └── query_logs/
└── docs/                  # Documentation
    └── METRICS.md
```
## Environment Variables

- `OPENAI_API_KEY` - required only if using OpenAI embeddings instead of local sentence-transformers.