Upload API
Two-step file upload process for browser uploads, n8n workflows, crawlers, and integrations
Overview
George AI uses a two-step upload process that provides flexibility for different upload scenarios while maintaining security and consistency. This same process is used across all upload methods: browser file uploads, n8n automation workflows, crawlers, and Google Drive integrations.
Secure
Authenticated uploads with time-limited tokens and validation
Flexible
Supports binary and base64 encoding for different use cases
Universal
Used by browser, n8n, crawlers, and Google Drive integrations
The Two-Step Upload Process
Prepare Upload (GraphQL Mutation)
Register the file metadata and receive a unique upload token (File ID)
mutation {
prepareFileUpload(data: {
libraryId: "library-id-here"
name: "document.pdf"
mimeType: "application/pdf"
size: 1048576
originUri: "desktop"
originModificationDate: "2025-01-15T10:30:00Z"
}) {
id # ← This is your upload token!
}
}
What happens in Step 1:
- File record created in database
- Unique file ID generated
- Metadata validated and stored
- Upload directory prepared
| Field | Type | Description | Required |
|---|---|---|---|
| libraryId | String | Target library ID | Yes |
| name | String | Original filename | Yes |
| mimeType | String | File MIME type | Yes |
| size | Int | File size in bytes | Yes |
| originUri | String | Source (desktop, url, crawler name) | Yes |
| originModificationDate | DateTime | Original file modification time | Yes |
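The field table above can be assembled directly from a local file. The sketch below (a hypothetical `build_prepare_variables` helper, not part of the API) derives `size` from the filesystem and `originModificationDate` from the file's mtime:

```python
import os
from datetime import datetime, timezone

def build_prepare_variables(library_id: str, path: str, mime_type: str) -> dict:
    """Assemble the prepareFileUpload variables for a local file.

    All six fields in the table above are required; size is the byte
    length on disk and originModificationDate is the file's mtime in ISO 8601.
    """
    stat = os.stat(path)
    return {
        "file": {
            "libraryId": library_id,
            "name": os.path.basename(path),
            "mimeType": mime_type,
            "size": stat.st_size,
            "originUri": "desktop",
            "originModificationDate": datetime.fromtimestamp(
                stat.st_mtime, tz=timezone.utc
            ).isoformat(),
        }
    }
```

Pass the returned dict as the `variables` of the `prepareFileUpload` mutation.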
Upload File Content (HTTP POST)
Send the actual file data to the upload endpoint using the token from Step 1
POST http://localhost:3003/upload
Content-Type: application/octet-stream
x-upload-token: {fileId from Step 1}
x-user-jwt: {your JWT token}
Authorization: ApiKey {your-api-key}
{binary file content}
What happens in Step 2:
- Upload token validated (5-minute expiry)
- File written to disk at correct location
- Upload marked as finished
- Processing task automatically created
Required Headers
| Header | Value | Purpose |
|---|---|---|
| x-upload-token | File ID from Step 1 | Identifies which file this upload is for |
| x-user-jwt | User JWT token | Authentication (user context) |
| Authorization | ApiKey <your-key> or Bearer <your-token> | API authentication |
| Content-Encoding | base64 | Optional; set only if sending base64-encoded data |
Important Notes:
- Upload token expires after 5 minutes
- Token can only be used once
- Do not use multipart/form-data; send raw binary or base64 only
- Upload is locked during processing to prevent race conditions
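Because the token expires five minutes after Step 1, long-running clients may want to check a token's age before Step 2 and re-run prepareFileUpload when it is stale. A minimal sketch (the `token_is_fresh` helper and its safety margin are assumptions, not part of the API):

```python
from datetime import datetime, timedelta, timezone

TOKEN_TTL = timedelta(minutes=5)  # upload tokens expire 5 minutes after Step 1

def token_is_fresh(issued_at, now=None, margin=timedelta(seconds=30)):
    """Return True if the upload token is still safely usable.

    Keeps a safety margin so Step 2 does not start just before expiry.
    When this returns False, call prepareFileUpload again for a new token.
    """
    now = now or datetime.now(timezone.utc)
    return now - issued_at < TOKEN_TTL - margin
```

Remember that a fresh-looking token may still be rejected if it was already used once; in that case, prepare a new upload as well.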
Upload Methods & Use Cases
Browser File Upload
The standard file upload used in the George AI web interface
// Step 1: Prepare (TanStack Server Function)
const uploadInfo = await prepareDesktopFileUploadsFn({
data: {
libraryId: 'library-id',
files: [{
name: file.name,
type: file.type,
size: file.size,
lastModified: new Date(file.lastModified)
}]
}
})
// Step 2: Upload with fetch
const response = await fetch(uploadInfo[0].uploadUrl, {
method: 'POST',
headers: {
'x-upload-token': uploadInfo[0].fileId,
'x-user-jwt': userToken,
'Authorization': `ApiKey ${apiKey}`
},
body: file // Raw File object
})
n8n Gmail Attachment Workflow
Automatically upload email attachments from Gmail to George AI
Workflow Nodes:
- Gmail Trigger: Watch for new emails with attachments
- Extract Attachments: Get attachment metadata and content
- GraphQL Request (Step 1): Call prepareFileUpload mutation
- HTTP Request (Step 2): Upload attachment to /upload endpoint
// Node 3: GraphQL Request (Prepare Upload)
POST http://localhost:3003/graphql
Headers:
x-api-key: your-api-key
Body:
{
"query": "mutation($file: AiLibraryFileInput!) { prepareFileUpload(data: $file) { id } }",
"variables": {
"file": {
"libraryId": "{{$json.libraryId}}",
"name": "{{$json.attachment.filename}}",
"mimeType": "{{$json.attachment.mimeType}}",
"size": {{$json.attachment.size}},
"originUri": "gmail:{{$json.email.id}}",
"originModificationDate": "{{$json.email.date}}"
}
}
}
// Node 4: HTTP Request (Upload File)
POST http://localhost:3003/upload
Headers:
x-upload-token: {{$node["GraphQL Request"].json.data.prepareFileUpload.id}}
x-api-key: your-api-key
Content-Encoding: base64
Body:
{{$json.attachment.data}} # Base64 from Gmail
Gmail Attachments are Base64
Gmail API returns attachments as base64-encoded strings. Set the Content-Encoding: base64 header and send the data directly.
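One pitfall with base64 bodies: the `size` declared in Step 1 must be the decoded byte length, not the length of the base64 string, which is roughly a third larger. A quick illustration (the sample bytes are made up):

```python
import base64

raw = b"%PDF-1.4 example attachment bytes"      # pretend attachment content
encoded = base64.b64encode(raw).decode("ascii")  # what you send as the body

# With Content-Encoding: base64 the server decodes the body back to the
# original bytes, so Step 1's `size` should be len(raw), not len(encoded).
assert base64.b64decode(encoded) == raw
print(len(raw), len(encoded))  # the base64 form is ~33% larger on the wire
```

The Python example further down follows the same rule: it passes `len(file_content)` (the raw size) while sending the base64 string.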
Crawler Uploads
SharePoint, SMB, HTTP, and Box crawlers use the same 2-step process
How Crawlers Upload:
- Crawler discovers file on remote source
- Step 1: prepareFileUpload with file metadata
- Download file from remote source (buffer in memory)
- Step 2: POST buffer to /upload endpoint
- All server-side, no browser involved
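The crawler steps above can be sketched as one function with injectable transports, so the two-step logic is testable without a live server. The `graphql_post`/`file_post` callables and their signatures are assumptions for illustration, not the crawlers' real internals:

```python
def crawler_upload(graphql_post, file_post, library_id, name, mime_type,
                   content: bytes, origin_uri: str, modified_iso: str) -> str:
    """Two-step upload of an in-memory buffer, crawler style.

    graphql_post(payload) -> parsed GraphQL response   (Step 1)
    file_post(file_id, content) -> upload response     (Step 2)
    """
    # Step 1: register metadata and receive the upload token (file ID)
    payload = {
        "query": "mutation($file: AiLibraryFileInput!) {"
                 " prepareFileUpload(data: $file) { id } }",
        "variables": {"file": {
            "libraryId": library_id, "name": name, "mimeType": mime_type,
            "size": len(content), "originUri": origin_uri,
            "originModificationDate": modified_iso,
        }},
    }
    file_id = graphql_post(payload)["data"]["prepareFileUpload"]["id"]
    # Step 2: POST the buffered bytes with the x-upload-token header
    file_post(file_id, content)
    return file_id
```

In a real crawler, `file_post` would set the x-upload-token and auth headers and POST the buffer to /upload, as shown in the complete examples below.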
Google Drive Upload
Similar to crawlers but triggered by user action in browser
How Google Drive Upload Works:
- User selects files from Google Drive picker (browser UI)
- TanStack Server Function handles download server-side
- Step 1: prepareFileUpload (server-side)
- Download from Google Drive API (server-side)
- Step 2: POST to /upload (server-side)
No Browser Upload
Even though the user selects files in the browser, the actual download and upload happens server-side via TanStack Server Functions. This avoids browser CORS issues and bandwidth limitations.
Error Handling
Common Upload Errors
| Status | Error | Cause | Solution |
|---|---|---|---|
| 400 | x-upload-token header is required | Missing upload token | Include x-upload-token header |
| 400 | x-upload-token outdated | Token already used | Start over with new prepareFileUpload |
| 400 | x-upload-token expired | More than 5 minutes passed | Prepare new upload token |
| 400 | Multipart form data not supported | Wrong content type | Send raw binary or base64 |
| 401 | Unauthorized | Missing/invalid auth | Check JWT token and API key |
| 405 | Method Not Allowed | Not using POST | Use POST method |
| 409 | File upload already in progress | Duplicate upload attempt | Wait for current upload to finish |
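The table above can be turned into a small recovery dispatcher. This is a hypothetical helper; the exact error strings may vary in your deployment, so match them loosely:

```python
def next_action(status: int, message: str = "") -> str:
    """Map an upload-endpoint error to a recovery action (per the table above)."""
    if status == 400 and ("outdated" in message or "expired" in message):
        return "reprepare"    # run prepareFileUpload again for a fresh token
    if status == 400:
        return "fix-request"  # missing token header or wrong content type
    if status == 401:
        return "fix-auth"     # check the JWT token and API key
    if status == 405:
        return "use-post"     # the endpoint only accepts POST
    if status == 409:
        return "wait"         # an upload for this file is already in progress
    return "unknown"
```

Only the "reprepare" path should loop back to Step 1; a 409 means a concurrent upload holds the lock and retrying immediately will keep failing.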
Canceling an Upload
If you need to cancel an upload before Step 2 or if Step 2 fails:
mutation {
cancelFileUpload(fileId: "file-id-from-step-1")
}
This removes the file record and cleans up any partial upload data.
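From a script, the cancel call is a plain GraphQL POST. A minimal payload builder (a hypothetical helper; production code should prefer GraphQL variables over string interpolation):

```python
def build_cancel_payload(file_id: str) -> dict:
    """Body for the cancelFileUpload mutation, ready to POST to /graphql."""
    # Mirrors the mutation shown above with the file ID inlined.
    return {"query": 'mutation { cancelFileUpload(fileId: "%s") }' % file_id}
```

POST it to the same /graphql endpoint with the same auth headers used for prepareFileUpload.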
Complete Examples
cURL Example
# Step 1: Prepare Upload
curl -X POST http://localhost:3003/graphql \
-H "Content-Type: application/json" \
-H "x-api-key: your-api-key" \
-d '{
"query": "mutation($file: AiLibraryFileInput!) { prepareFileUpload(data: $file) { id } }",
"variables": {
"file": {
"libraryId": "lib123",
"name": "document.pdf",
"mimeType": "application/pdf",
"size": 1048576,
"originUri": "desktop",
"originModificationDate": "2025-01-15T10:30:00Z"
}
}
}'
# Response: {"data": {"prepareFileUpload": {"id": "file-abc123"}}}
# Step 2: Upload File
curl -X POST http://localhost:3003/upload \
-H "x-upload-token: file-abc123" \
-H "x-user-jwt: your-jwt-token" \
-H "Authorization: ApiKey your-api-key" \
--data-binary @document.pdf
# Response: {"status": "success"}
Python Example
import requests
import base64
from datetime import datetime
# Read file and encode to base64
with open('document.pdf', 'rb') as f:
file_content = f.read()
base64_content = base64.b64encode(file_content).decode('utf-8')
# Step 1: Prepare Upload
prepare_response = requests.post(
'http://localhost:3003/graphql',
headers={'x-api-key': 'your-api-key'},
json={
'query': '''
mutation($file: AiLibraryFileInput!) {
prepareFileUpload(data: $file) { id }
}
''',
'variables': {
'file': {
'libraryId': 'lib123',
'name': 'document.pdf',
'mimeType': 'application/pdf',
'size': len(file_content),
'originUri': 'python-script',
'originModificationDate': datetime.now().isoformat()
}
}
}
)
file_id = prepare_response.json()['data']['prepareFileUpload']['id']
# Step 2: Upload File (Base64)
upload_response = requests.post(
'http://localhost:3003/upload',
headers={
'x-upload-token': file_id,
'x-api-key': 'your-api-key',
'Content-Encoding': 'base64'
},
data=base64_content
)
print(upload_response.json())
Node.js Example
import fs from 'fs'
import fetch from 'node-fetch'
const apiKey = 'your-api-key'
const userJwt = 'your-jwt-token'
const libraryId = 'lib123'
const filePath = './document.pdf'
// Read file
const fileBuffer = fs.readFileSync(filePath)
const fileStats = fs.statSync(filePath)
// Step 1: Prepare Upload
const prepareResponse = await fetch('http://localhost:3003/graphql', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-api-key': apiKey
},
body: JSON.stringify({
query: `
mutation($file: AiLibraryFileInput!) {
prepareFileUpload(data: $file) { id }
}
`,
variables: {
file: {
libraryId,
name: 'document.pdf',
mimeType: 'application/pdf',
size: fileStats.size,
originUri: 'nodejs-script',
originModificationDate: fileStats.mtime.toISOString()
}
}
})
})
const { data } = await prepareResponse.json()
const fileId = data.prepareFileUpload.id
// Step 2: Upload File
const uploadResponse = await fetch('http://localhost:3003/upload', {
method: 'POST',
headers: {
'x-upload-token': fileId,
'x-user-jwt': userJwt,
'Authorization': `ApiKey ${apiKey}`
},
body: fileBuffer
})
const result = await uploadResponse.json()
console.log(result) // { status: 'success' }