n8n Integration

Connect George AI with n8n to automate document ingestion, build AI agents, and create powerful workflows

What is n8n Integration?

n8n is a workflow automation tool that allows you to connect George AI with hundreds of other services. George AI provides pre-built workflow templates for common integration patterns, making it easy to automate document ingestion and build custom AI agents.

Automated Document Ingestion

Automatically ingest emails, attachments, and files from external sources into George AI libraries

AI Agent Experiments

Build and test RAG-powered AI agents that can use George AI's data through custom tools

Prerequisites

George AI Running
n8n Running
Library Created
API Key Generated

1. George AI Running

pnpm dev
Server running on http://localhost:3003

2. n8n Running

npx n8n
n8n ready on http://localhost:5678

3. Create Library

Create a library in George AI where documents will be ingested

4. Generate API Key

Generate an API key for your library to authenticate n8n requests

Important: Docker Networking

When n8n runs in Docker, it must reach George AI through Docker service names, not localhost: use http://app:3003/graphql, not http://localhost:3003/graphql.

Available Workflow Templates

Production

Gmail to George AI

Automatically ingest Gmail emails and attachments into George AI libraries

Features:

  • Automatic Polling
  • Manual Bulk Import
  • Duplicate Detection
  • Full Metadata
  • Attachments
Quick Setup
  1. Import gmail-to-george-ai.json into n8n
  2. Configure George AI API Key credential (Header Auth):
    • Name: Authorization
    • Value: Bearer YOUR_API_KEY
  3. Configure Gmail OAuth2 credential
  4. Edit "Set Configuration" node with your libraryId and URLs
  5. Activate workflow for automatic polling or use manual trigger for testing
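For reference, the "Set Configuration" node for this workflow holds values along these lines. The field names are illustrative (check the imported workflow for the exact keys), and uploadUrl is an assumed name based on the upload endpoint:

```json
{
  "libraryId": "YOUR_LIBRARY_ID",
  "graphqlUrl": "http://app:3003/graphql",
  "uploadUrl": "http://app:3003/upload"
}
```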
Experimental

RAG Chatbot Experiment

AI Agent-powered chatbot for prototyping RAG patterns with George AI tools

Features:

  • AI Agent
  • Similarity Search Tool
  • File Listing Tool
  • Conversation Memory
  • Ollama LLM
Requires Ollama running with a model installed (e.g., llama3.1:8b)
Quick Setup
  1. Import rag-chatbot-experiment.json into n8n
  2. Configure George AI API Key credential
  3. Edit "Set Configuration" node:
    • libraryId: Your library ID
    • graphqlUrl: http://app:3003/graphql
    • ollamaUrl: http://gai-ollama:11434
    • chatModel: Your Ollama model (e.g., llama3.1:8b)
  4. Test the chat interface in n8n

Configuration Guide

API Key Configuration

In n8n, create a new Header Auth credential:

  • Credential Type: Header Auth
  • Name: Authorization
  • Value: Bearer YOUR_API_KEY_HERE
The Bearer prefix with a space is required!
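The credential value is the raw key with a literal "Bearer " prefix. A minimal shell sketch of the format (the key itself is a placeholder):

```shell
API_KEY="YOUR_API_KEY_HERE"      # placeholder -- substitute your real key
AUTH_VALUE="Bearer ${API_KEY}"   # note the single space after "Bearer"

# n8n sends this as:  Authorization: Bearer YOUR_API_KEY_HERE
echo "$AUTH_VALUE"
```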

Docker Service URLs

Service              Correct (from n8n)         Wrong
George AI GraphQL    http://app:3003/graphql    http://localhost:3003/graphql
George AI Upload     http://app:3003/upload     http://localhost:3003/upload
Ollama               http://gai-ollama:11434    http://localhost:11434
Your browser uses localhost, but n8n workflows must use Docker service names
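Put together, a call from n8n to George AI follows the sketch below. The query "{ __typename }" is a schema-independent smoke test, since the actual GraphQL schema isn't covered here:

```shell
# Service URL as seen from inside the Docker network (not localhost)
GRAPHQL_URL="http://app:3003/graphql"
API_KEY="YOUR_API_KEY"   # placeholder

# Equivalent of an n8n HTTP Request node call, written as curl.
# Uncomment to run from a container attached to the same Docker network:
#   curl -s -X POST "$GRAPHQL_URL" \
#     -H "Authorization: Bearer $API_KEY" \
#     -H "Content-Type: application/json" \
#     -d '{"query":"{ __typename }"}'

echo "POST $GRAPHQL_URL"
```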

Common Use Cases

Email Knowledge Base

Automatically ingest support emails and their attachments to build a searchable knowledge base

Gmail to George AI

Custom AI Chat Interface

Build a custom chatbot with your own UI that uses George AI as the knowledge source

RAG Chatbot Experiment

Document Triage

Automatically classify and route incoming documents based on their content

Custom Workflow

Multi-Source Aggregation

Combine documents from Gmail, Dropbox, SharePoint, and other sources into one library

Multiple Workflows

Troubleshooting

401 Unauthorized Error
  • Verify API key has Bearer prefix (with space)
  • Check that API key is valid for the specified library
  • Ensure you're using http://app:3003/* not localhost
  • Confirm credentials are assigned to all HTTP Request nodes
Gmail Not Fetching Emails
  • Re-authenticate Gmail OAuth2 connection
  • Enable Gmail API in Google Cloud Console
  • Check Gmail quota limits
  • Verify trigger is activated
Files Not Appearing in Library
  • Verify George AI is running (pnpm dev)
  • Check backend logs for processing errors
  • Ensure library ID in workflow matches your library
  • Wait for files to be processed (check processing queue)
Ollama Not Responding
  • Check if model is pulled: docker exec gai-ollama ollama list
  • Pull model if needed: docker exec gai-ollama ollama pull llama3.1:8b
  • Verify Ollama URL uses service name: http://gai-ollama:11434
  • Ensure Ollama container is running
Agent Not Using Tools
  • Make tool descriptions clear and specific
  • Try more explicit questions
  • Review execution log to see agent's reasoning
  • Try a different LLM model (some are better at tool usage)
  • Check that George AI has processed content in the library

Best Practices

Do

  • Generate separate API keys per workflow for better security
  • Test workflows with manual trigger before enabling automatic polling
  • Use Docker service names (app:3003) not localhost
  • Monitor execution logs to debug issues
  • Start with small batch sizes when testing bulk imports
  • Revoke unused API keys immediately

Don't

  • Commit API keys to version control
  • Use localhost URLs in n8n workflows (use service names)
  • Forget the Bearer prefix in the API key credential
  • Import thousands of emails without testing first
  • Enable workflows before verifying configuration
  • Ignore duplicate detection - it prevents redundant processing