n8n Integration
Connect George AI with n8n to automate document ingestion, build AI agents, and create powerful workflows
What is n8n Integration?
n8n is a workflow automation tool that allows you to connect George AI with hundreds of other services. George AI provides pre-built workflow templates for common integration patterns, making it easy to automate document ingestion and build custom AI agents.
Automated Document Ingestion
Automatically ingest emails, attachments, and files from external sources into George AI libraries
AI Agent Experiments
Build and test RAG-powered AI agents that can use George AI's data through custom tools
Prerequisites
1. George AI Running: start the dev server with `pnpm dev`; it listens on http://localhost:3003
2. n8n Running: start n8n with `npx n8n`; it is ready on http://localhost:5678
3. Create Library: create a library in George AI where documents will be ingested
4. Generate API Key: generate an API key for your library to authenticate n8n requests
Docker Networking Important
When n8n runs in Docker, it must reach other containers by their Docker service names, not localhost. Use `http://app:3003/graphql`, not `http://localhost:3003/graphql`.
Available Workflow Templates
Gmail to George AI
Automatically ingest Gmail emails and attachments into George AI libraries
Setup:
1. Import `gmail-to-george-ai.json` into n8n
2. Configure the George AI API Key credential (Header Auth):
   - Name: `Authorization`
   - Value: `Bearer YOUR_API_KEY`
3. Configure the Gmail OAuth2 credential
4. Edit the "Set Configuration" node with your libraryId and URLs
5. Activate the workflow for automatic polling, or use the manual trigger for testing
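Under the hood, a workflow like this ends by posting each attachment to George AI's upload endpoint with the same Bearer-authenticated header. A minimal Python sketch of that request, assuming multipart field names `libraryId` and `file` (check the template's HTTP Request node for the actual names):

```python
import uuid

def build_upload_request(upload_url: str, api_key: str, library_id: str,
                         filename: str, data: bytes):
    """Build (url, headers, body) for a multipart upload to George AI.

    The field names "libraryId" and "file" are assumptions; verify them
    against the workflow template's HTTP Request node.
    """
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="libraryId"\r\n\r\n'
        f"{library_id}\r\n"
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + data + f"\r\n--{boundary}--\r\n".encode()
    headers = {
        # The "Bearer " prefix (with a space) is required.
        "Authorization": f"Bearer {api_key}",
        "Content-Type": f"multipart/form-data; boundary={boundary}",
    }
    return upload_url, headers, body
```

Note the upload URL from n8n must be `http://app:3003/upload` (the Docker service name), not a localhost URL.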
RAG Chatbot Experiment
AI Agent-powered chatbot for prototyping RAG patterns with George AI tools
Setup:
1. Import `rag-chatbot-experiment.json` into n8n
2. Configure the George AI API Key credential
3. Edit the "Set Configuration" node:
   - libraryId: your library ID
   - graphqlUrl: `http://app:3003/graphql`
   - ollamaUrl: `http://gai-ollama:11434`
   - chatModel: your Ollama model (e.g., `llama3.1:8b`)
4. Test the chat interface in n8n
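The chatbot's tools call George AI over GraphQL with the same authenticated POST the workflow template configures. A sketch of the payload construction; the operation name `searchLibrary` below is purely illustrative, so inspect the template's HTTP Request node for the real query:

```python
import json

def build_graphql_request(graphql_url: str, api_key: str,
                          query: str, variables: dict):
    """Build (url, headers, body) for a GraphQL POST to George AI."""
    headers = {
        # Header Auth credential: name "Authorization", value "Bearer <key>"
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"query": query, "variables": variables}).encode()
    return graphql_url, headers, body

# Hypothetical search-style query; the actual George AI schema may differ.
QUERY = """
query Search($libraryId: ID!, $question: String!) {
  searchLibrary(libraryId: $libraryId, question: $question) { text }
}
"""
```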
Configuration Guide
API Key Configuration
In n8n, create a new Header Auth credential:
- Credential Type: Header Auth
- Name: `Authorization`
- Value: `Bearer YOUR_API_KEY_HERE`

The `Bearer` prefix, followed by a space, is required!

Docker Service URLs
| Service | Correct (from n8n) | Wrong |
|---|---|---|
| George AI GraphQL | http://app:3003/graphql | http://localhost:3003/graphql |
| George AI Upload | http://app:3003/upload | http://localhost:3003/upload |
| Ollama | http://gai-ollama:11434 | http://localhost:11434 |
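If you generate or patch workflow configurations programmatically, a small helper can enforce the mapping in the table above (the host pairs come straight from the table; the helper itself is illustrative):

```python
# Map host-side URLs to the Docker service URLs n8n must use.
SERVICE_HOSTS = {
    "localhost:3003": "app:3003",            # George AI (GraphQL and upload)
    "localhost:11434": "gai-ollama:11434",   # Ollama
}

def to_docker_url(url: str) -> str:
    """Rewrite a localhost URL to its Docker service equivalent."""
    for host, service in SERVICE_HOSTS.items():
        url = url.replace(host, service)
    return url
```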
Common Use Cases
Email Knowledge Base
Automatically ingest support emails and their attachments to build a searchable knowledge base
Custom AI Chat Interface
Build a custom chatbot with your own UI that uses George AI as the knowledge source
Document Triage
Automatically classify and route incoming documents based on their content
Multi-Source Aggregation
Combine documents from Gmail, Dropbox, SharePoint, and other sources into one library
Troubleshooting
Authentication errors:
- Verify the API key has the `Bearer` prefix (with a space)
- Check that the API key is valid for the specified library
- Ensure you're using `http://app:3003/*`, not `localhost`
- Confirm credentials are assigned to all HTTP Request nodes
Gmail trigger issues:
- Re-authenticate the Gmail OAuth2 connection
- Enable the Gmail API in Google Cloud Console
- Check Gmail quota limits
- Verify the trigger is activated
Documents not appearing in the library:
- Verify George AI is running (`pnpm dev`)
- Check backend logs for processing errors
- Ensure the library ID in the workflow matches your library
- Wait for files to be processed (check the processing queue)
Ollama model errors:
- Check if the model is pulled: `docker exec gai-ollama ollama list`
- Pull the model if needed: `docker exec gai-ollama ollama pull llama3.1:8b`
- Verify the Ollama URL uses the service name: `http://gai-ollama:11434`
- Ensure the Ollama container is running
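The model check above can be scripted. A small parser for `ollama list` output, assuming the model name is the first column after a header row (verify against your Ollama version's output format):

```python
def model_pulled(list_output: str, model: str) -> bool:
    """Return True if `model` appears in `ollama list` output.

    Assumes the first line is a header and the model name is the
    first whitespace-separated column of each subsequent line.
    """
    lines = list_output.strip().splitlines()[1:]
    return any(line.split()[0] == model for line in lines if line.split())
```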
Agent not using tools:
- Make tool descriptions clear and specific
- Try more explicit questions
- Review the execution log to see the agent's reasoning
- Try a different LLM model (some are better at tool usage)
- Check that George AI has processed content in the library
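To make the first tip concrete, here is a hypothetical example of a "clear and specific" tool definition: a distinctive name plus a description that states what the tool searches, what input it takes, and when the agent should call it.

```python
# Hypothetical tool definition for the n8n AI agent; the exact schema
# depends on how the workflow registers its tools.
search_tool = {
    "name": "search_company_docs",
    "description": (
        "Search the company knowledge base (George AI) for passages "
        "relevant to the user's question. Input: a plain-text search "
        "query. Call this before answering any factual question about "
        "company documents."
    ),
}
```

A vague description like "searches stuff" gives the model no signal about when the tool applies; naming the data source and the expected input does.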
Best Practices
Do
- ✓ Generate separate API keys per workflow for better security
- ✓ Test workflows with manual trigger before enabling automatic polling
- ✓ Use Docker service names (app:3003) not localhost
- ✓ Monitor execution logs to debug issues
- ✓ Start with small batch sizes when testing bulk imports
- ✓ Revoke unused API keys immediately
Don't
- ✗ Commit API keys to version control
- ✗ Use localhost URLs in n8n workflows (use service names)
- ✗ Forget the `Bearer` prefix in the API key
- ✗ Import thousands of emails without testing first
- ✗ Enable workflows before verifying configuration
- ✗ Ignore duplicate detection - it prevents redundant processing
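On that last point: duplicate detection commonly works by hashing content before ingestion and skipping anything already seen. A generic sketch of the idea (George AI's actual mechanism may differ):

```python
import hashlib

# Hashes of documents already ingested (in practice, persisted storage).
seen: set[str] = set()

def should_ingest(data: bytes) -> bool:
    """Return True only the first time a given document body is seen."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in seen:
        return False  # duplicate: skip redundant processing
    seen.add(digest)
    return True
```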