G-Retriever Obsidian - Part 2: Building the Plugin


Posted on December 25, 2025 by Mikel Bahn

G-Retriever ready! Ask me anything about your notes.
What are the main concepts in my machine learning notes?
Based on your notes, the key concepts are: Neural Networks with Backpropagation, Gradient Descent optimization, Loss Functions (MSE, Cross-Entropy)...
Sources: Neural Networks, Backpropagation, Gradient Descent

Overview

This guide shows you how to transform your G-Retriever system into a fully functional Obsidian plugin. Instead of running queries from the command line, users can now chat directly within Obsidian!

What you'll build:

  • A chat interface embedded in Obsidian's sidebar
  • Auto-connecting backend with smart port detection
  • Real-time status indicators
  • Beautiful UI that matches Obsidian's theme
  • Easy distribution for end users

✅ Prerequisites:

You should have completed Part 1 and have:

  • Working G-Retriever system with trained/untrained models
  • Generated graph from your Obsidian vault
  • Ollama running with llama3:8b
  • All Python modules from Part 1

Why Build an Obsidian Plugin?


Native Integration

Chat interface lives directly in Obsidian, matching its look and feel perfectly.

Instant Access

No switching between terminal and editor. Ask questions while writing.


Context Aware

Plugin knows which vault you're in, automatically using the right graph.


Easy Distribution

Share with others - they just enable the plugin and start chatting.

Plugin Architecture

How it Works

Obsidian Plugin (JavaScript) ↔ Flask API (Python) ↔ G-Retriever Backend ↔ Ollama LLM

Component Breakdown

Frontend (Obsidian Plugin)

  • Language: JavaScript
  • UI: Custom chat interface
  • Communication: REST API calls
  • Files: main.js, styles.css, manifest.json

Backend (Python API)

  • Framework: Flask
  • Port: Auto-detected (5000+)
  • Endpoints: /query, /health
  • File: api_server.py

Why This Approach?

✅ Pros of Hybrid Architecture

  • Keep full Python/PyTorch power
  • Native Obsidian integration
  • Easy to develop and debug
  • Can use any Python library
  • Ollama stays external (good!)

⚠️ Trade-offs

  • Users must start Python server
  • Not a "pure" Obsidian plugin
  • Requires Python installation
  • Two-component setup

Alternative Approaches Considered:

  • Pure JavaScript: Would require rewriting everything in TensorFlow.js - massive effort, worse performance
  • Python Binary: Package Python as executable - works but creates 500MB+ downloads
  • Hybrid (Current): Best balance of power and usability

⚙️ Installation Guide

Project Structure

Important: Files go in TWO different places!

Your Python Project (G_Obsidian/)
├── api_server.py ← NEW
├── gretriever_inference.py
├── train_gretriever.py
├── graph_output/
├── training_data/
├── START_SERVER.command ← NEW
└── requirements.txt

Your Obsidian Vault
└── .obsidian/
    └── plugins/
        └── g-retriever-chat/
            ├── manifest.json ← NEW
            ├── main.js ← NEW
            └── styles.css ← NEW

Step-by-Step Setup

Step 1: Install Flask Dependencies

Add API server capabilities to your Python project:

cd /path/to/G_Obsidian
pip install flask flask-cors

Step 2: Create API Server

Create api_server.py in your Python project. This bridges Obsidian and G-Retriever.

Key Features:
  • Auto-detects free port (no conflicts!)
  • Saves config for plugin to read
  • CORS enabled for Obsidian
  • Simple REST endpoints

Step 3: Create Plugin Folder

In your Obsidian vault, create the plugin directory:

mkdir -p /path/to/vault/.obsidian/plugins/g-retriever-chat
Note: The .obsidian folder is hidden! On Mac, press Cmd+Shift+. to show hidden files.

Step 4: Add Plugin Files

Create three files in the plugin folder:

  • manifest.json - Plugin metadata
  • main.js - Plugin logic and UI
  • styles.css - Beautiful styling
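
For reference, a minimal manifest.json could look like the following. The field values are placeholders to adapt; isDesktopOnly is set because main.js uses Node's fs module, which is unavailable on mobile:

```json
{
  "id": "g-retriever-chat",
  "name": "G-Retriever Chat",
  "version": "1.0.0",
  "minAppVersion": "1.0.0",
  "description": "Chat with your vault using G-Retriever.",
  "author": "Your Name",
  "isDesktopOnly": true
}
```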

Step 5: Create Start Script

Make it easy for users to start the backend:

Mac/Linux: START_SERVER.command

#!/bin/bash
cd "$(dirname "$0")"
source venv/bin/activate
python api_server.py

Windows: START_SERVER.bat

@echo off
cd /d %~dp0
call venv\Scripts\activate
python api_server.py
pause

Make executable on Mac/Linux:

chmod +x START_SERVER.command

If you like the project, I'd be happy about a ☕️: https://ko-fi.com/mikelbahn

Development Details

API Server (api_server.py)

The Flask server is the bridge between your plugin and G-Retriever. Key features:

Auto Port Detection

import socket

def find_free_port(start_port=5000, attempts=10):
    """Return the first bindable port in [start_port, start_port + attempts)."""
    for port in range(start_port, start_port + attempts):
        try:
            sock = socket.socket()
            sock.bind(('', port))
            sock.close()
            return port
        except OSError:
            continue  # port taken, try the next one
    raise RuntimeError(f"No free port between {start_port} and {start_port + attempts - 1}")

Config Persistence

import json
from pathlib import Path

def save_port_config(port):
    """Persist the chosen port so the plugin can discover the server URL."""
    config = {
        "port": port,
        "url": f"http://localhost:{port}"
    }
    with open(Path.home() / ".g-retriever-config.json", 'w') as f:
        json.dump(config, f)
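
The plugin later reads this file back. The round trip can be sketched end to end; load_port_config here is an illustrative stand-in for the plugin's getApiUrl(), and a temporary path is used instead of the real home-directory file:

```python
import json
import tempfile
from pathlib import Path

def save_port_config(port, config_path):
    # Same JSON shape api_server.py writes, but to a caller-supplied path.
    config_path.write_text(json.dumps({"port": port, "url": f"http://localhost:{port}"}))

def load_port_config(config_path, fallback="http://localhost:5000"):
    # Mirrors the plugin's getApiUrl(): use the saved URL if present, else fall back.
    try:
        return json.loads(config_path.read_text())["url"]
    except (OSError, KeyError, json.JSONDecodeError):
        return fallback

tmp = Path(tempfile.mkdtemp())
save_port_config(5001, tmp / "config.json")
print(load_port_config(tmp / "config.json"))   # http://localhost:5001
print(load_port_config(tmp / "missing.json"))  # http://localhost:5000 (fallback)
```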

Query Endpoint

@app.route('/query', methods=['POST'])
def query():
    data = request.get_json(silent=True) or {}
    question = data.get('question', '').strip()
    if not question:
        return jsonify({'error': 'No question provided'}), 400
    result = retriever.query(question)
    return jsonify(result)

Health Check

@app.route('/health', methods=['GET'])
def health():
    return jsonify({
        'status': 'ok',
        'message': 'G-Retriever running'
    })
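
Once the server is running, both endpoints can be smoke-tested from Python without the plugin. This is a sketch using only the standard library; check_health and ask are illustrative names, not part of the project:

```python
import json
import urllib.error
import urllib.request

def check_health(base_url, timeout=3):
    """Return the /health payload if the backend answers, else None."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return json.load(resp)
    except (urllib.error.URLError, OSError):
        return None

def ask(base_url, question, timeout=120):
    """POST a question to /query and return the parsed JSON result."""
    req = urllib.request.Request(
        f"{base_url}/query",
        data=json.dumps({"question": question}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    status = check_health("http://localhost:5000")
    print("backend up:", status is not None)
```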

Plugin Architecture (main.js)

Core Classes

Class              | Purpose                | Key Methods
GRetrieverPlugin   | Main plugin controller | onload(), activateView(), getApiUrl()
GRetrieverChatView | Chat UI and logic      | sendMessage(), checkBackendStatus()

Smart Backend Connection

The plugin automatically finds the backend by:

  1. Reading ~/.g-retriever-config.json (auto-generated by server)
  2. Falling back to http://localhost:5000
  3. Testing connection with /health endpoint
  4. Showing clear status indicator to user

// Note: fs, os and path come from Node's require() at the top of main.js,
// which is why this plugin only works in desktop Obsidian.
async getApiUrl() {
    const configPath = path.join(os.homedir(), '.g-retriever-config.json');

    try {
        if (fs.existsSync(configPath)) {
            const config = JSON.parse(fs.readFileSync(configPath, 'utf8'));
            return config.url;
        }
    } catch (error) {
        console.error('Error reading config:', error);
    }

    // Fall back to the default port
    return 'http://localhost:5000';
}

UI Components

Status Bar

Shows connection status with color indicators:

  • Connected - Backend is running and responsive
  • Not Connected - Backend not found or not responding
  • Checking... - Testing connection

Chat Interface

  • User messages: Right-aligned, accent color
  • Assistant messages: Left-aligned, secondary background
  • System messages: Centered, muted style
  • Sources: Small italic text showing which notes were used

Input Area

  • Resizable textarea for questions
  • Send button
  • Keyboard shortcut: Ctrl/Cmd + Enter

Styling Philosophy

Theme Integration

The plugin uses Obsidian's CSS variables to perfectly match any theme:

  • --background-primary - Main background
  • --background-secondary - Panel background
  • --text-normal - Text color
  • --interactive-accent - Accent color
  • --text-on-accent - Text on accent

This means it works beautifully with any Obsidian theme!
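
For example, a couple of rules from styles.css can lean on those variables directly (class names follow the plugin's g-retriever-* convention used elsewhere in this guide):

```css
/* Message bubbles inherit whatever theme is active */
.g-retriever-user {
  background: var(--interactive-accent);
  color: var(--text-on-accent);
}

.g-retriever-assistant {
  background: var(--background-secondary);
  color: var(--text-normal);
}
```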

Usage

For Developers (You)

Step 1: Start Backend

cd /path/to/G_Obsidian
python api_server.py
Or double-click START_SERVER.command

You should see:

Initializing G-Retriever...
GraphRetriever ready!
============================================================
G-Retriever API Server
Running on http://localhost:5000
============================================================

Step 2: Enable Plugin in Obsidian

  1. Open Obsidian Settings (⚙️)
  2. Go to Community plugins
  3. Click the Reload button
  4. Find "G-Retriever Chat" in the list
  5. Toggle it ON

Step 3: Open Chat Interface

Two ways to open:

  • Click the G-Retriever icon in the left ribbon
  • Open the command palette (Ctrl/Cmd + P) and type "G-Retriever"

The chat panel opens in the right sidebar!

Step 4: Start Chatting!

  • Type your question in the input box
  • Press Ctrl/Cmd + Enter or click "Send"
  • G-Retriever will search your notes and respond
  • Sources are shown below each answer

Example Queries

Factual Questions

  • "What is gradient descent?"
  • "How do I implement backpropagation?"
  • "What are the main Python libraries I use?"

Connection Questions

  • "How do neural networks relate to deep learning?"
  • "What projects use React?"
  • "Show connections between my ML notes"

Summary Questions

  • "Summarize my notes on transformers"
  • "What have I learned about Python?"
  • "Overview of my project ideas"

Specific Lookups

  • "Find notes mentioning PyTorch"
  • "What did I write about attention mechanisms?"
  • "Show me graph-related notes"

Distribution

For End Users

When sharing your plugin, users need to:

Step 1: Prerequisites

  • Python 3.9+ installed
  • Ollama running with llama3:8b
  • Your G_Obsidian package with their graph

Step 2: Install Plugin

Copy the plugin folder to their vault:

cp -r g-retriever-chat /path/to/their/vault/.obsidian/plugins/

Step 3: Start Server

Double-click START_SERVER.command (keep window open)

Step 4: Enable in Obsidian

Settings → Community Plugins → Enable "G-Retriever Chat"

Distribution Package

Create a simple distribution package with this structure:

G-Retriever-Plugin-Package/
├── README.md
├── requirements.txt
├── START_SERVER.command
├── START_SERVER.bat
├── plugin/
│ ├── manifest.json
│ ├── main.js
│ └── styles.css
└── python/
    ├── api_server.py
    ├── gretriever_inference.py
    └── ... (all other Python files)

User README Template

README.md

# G-Retriever Chat for Obsidian

## Quick Start

1. Install Python Dependencies

       cd python/
       pip install -r requirements.txt

2. Generate Your Graph

       python obsidian_to_graph.py /path/to/your/vault
       python generate_training_data.py

3. Install Plugin

   Copy the plugin/ folder into your vault:

       /path/to/your/vault/.obsidian/plugins/g-retriever-chat/

4. Start Server

   Mac/Linux: Double-click START_SERVER.command
   Windows: Double-click START_SERVER.bat

5. Enable Plugin

   Obsidian → Settings → Community Plugins → Enable "G-Retriever Chat"

6. Chat!

   Click the G-Retriever icon in the sidebar.

## Troubleshooting

Backend not connecting?

- Make sure the START_SERVER script is running
- Check that Ollama is running: ollama list
- Try restarting the backend

Plugin not showing up?

- Verify the files are in .obsidian/plugins/g-retriever-chat/
- Click the reload button in Community Plugins
- Check the Developer Console (Ctrl+Shift+I) for errors

Port already in use?

- The server auto-detects free ports
- Check .g-retriever-config.json in your home directory

## Support

For issues, check the GitHub repository or open an issue.

Advanced Topics

Customization Options

Custom Styling

Users can customize the chat appearance by editing styles.css:

/* Change chat colors */
.g-retriever-user {
    background: #your-color;
}

/* Adjust sizes */
.g-retriever-input {
    min-height: 100px;
}

Backend Configuration

Modify api_server.py to:

  • Change default port range
  • Adjust timeout values
  • Add custom endpoints
  • Enable debug logging

Performance Tuning

  • Adjust k_retrieve parameter
  • Change context length limit
  • Modify temperature settings
  • Cache frequently accessed nodes
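
These knobs could be grouped into a single settings object. A sketch, assuming the parameter names from Part 1 (k_retrieve, temperature); the dataclass itself is hypothetical, not part of the project:

```python
from dataclasses import dataclass

@dataclass
class RetrieverSettings:
    # Hypothetical grouping of the tuning knobs listed above.
    k_retrieve: int = 5            # how many nodes to pull into context
    max_context_chars: int = 4000  # cap on the prompt size sent to Ollama
    temperature: float = 0.2       # lower = more deterministic answers
    cache_nodes: bool = True       # cache frequently accessed nodes

# Smaller vaults often work well with fewer retrieved nodes.
settings = RetrieverSettings(k_retrieve=3)
print(settings.k_retrieve)  # 3
```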

Add Features

Extend the plugin with:

  • Chat history export
  • Favorite queries
  • Multi-vault support
  • Note insertion from chat

Troubleshooting Common Issues

Issue                    | Cause                      | Solution
Plugin not appearing     | Files in wrong location    | Verify path: .obsidian/plugins/g-retriever-chat/
"Backend not responding" | Server not running         | Start api_server.py
Port conflict            | Ports 5000-5009 occupied   | Auto-detection handles this; check the config file
Slow responses           | Large vault or slow Ollama | Reduce k_retrieve or use a faster Ollama model
Wrong notes retrieved    | Graph outdated             | Regenerate: python obsidian_to_graph.py

Future Enhancements

Potential Features to Add:

  • Auto-refresh: Detect vault changes and regenerate graph automatically
  • Context menu: Right-click on notes to ask questions about them
  • Inline questions: Select text and ask questions via context menu
  • Export conversations: Save chat history as markdown notes
  • Multi-model support: Switch between different Ollama models
  • Voice input: Ask questions via microphone
  • Graph visualization: Show which notes are connected to your query
  • Settings panel: Configure behavior without editing code

Conclusion

You've Built a Full-Featured Plugin!

Congratulations! You now have:

  • ✅ A beautiful chat interface in Obsidian
  • ✅ Smart backend with auto-detection
  • ✅ Real-time status indicators
  • ✅ Theme-aware styling
  • ✅ Easy distribution for users

Key Takeaways

Architecture

Hybrid approach combines JavaScript frontend with Python backend for best of both worlds.

Integration

Using Obsidian's CSS variables ensures perfect theme compatibility.

User Experience

Auto-detection and clear status indicators make setup painless.

Distribution

Simple start scripts and clear README make sharing easy.

What's Next?

Continue the Journey:

  • Share your plugin with the community
  • Add custom features based on your needs
  • Experiment with different models and parameters
  • Build other Obsidian plugins using this template
  • Contribute improvements back to the project