AI coding agents like Claude Code and OpenAI Codex build up project memory over time — your coding conventions, architecture decisions, feedback on past approaches, and context about ongoing work. This memory is what makes the agent increasingly useful the longer you work with it.
The problem? That memory lives on your local filesystem. Switch machines, reformat your laptop, or onboard a teammate, and it's gone. The solution is simple: sync your agent memory to S3-compatible object storage.
This guide shows you how to use DanubeData's S3-compatible storage (or any S3 provider) to keep your AI coding agent memory persistent, backed up, and shareable across machines.
Why S3 for AI Agent Memory?
AI coding agents store their memory as plain files — markdown, JSON, YAML. This makes them a perfect fit for object storage:
- Durability: S3 storage is designed for 99.999999999% (11 nines) durability — your memory won't disappear
- Versioning: S3 versioning means you can roll back memory to any previous state
- Cross-machine sync: Pull your memory on any machine with a single command
- Team sharing: Share project-level memory across your entire team via a shared bucket
- Cost: AI agent memory is tiny (typically under 1 MB) — storage cost is effectively zero
How AI Coding Agents Store Memory
Claude Code
Claude Code stores memory in two locations:
# Global memory (your preferences, role, feedback)
~/.claude/
# Project-specific memory (architecture, conventions, ongoing work)
your-project/.claude/
Inside these directories, Claude Code maintains:
- CLAUDE.md — Project instructions and conventions
- memory/MEMORY.md — Index of all memory entries
- memory/*.md — Individual memory files (user preferences, feedback, project context, references)
OpenAI Codex
Codex uses a similar pattern:
# Project instructions
your-project/AGENTS.md
# Codex configuration
your-project/.codex/
Other Agents
Most AI coding agents follow the same pattern — a dotfile directory with markdown or JSON files. The sync approach below works for all of them.
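Because each agent keeps its memory in a predictable dotfile directory, a tiny helper can resolve the right directory per agent. A minimal sketch (the helper name and the mapping are illustrative, not part of any agent's tooling):

```shell
# Resolve the memory directory for a given agent (illustrative mapping)
memory_dir() {
  case "$1" in
    claude) echo ".claude" ;;   # Claude Code project memory
    codex)  echo ".codex" ;;    # OpenAI Codex configuration
    *)      echo ".$1" ;;       # most agents follow the dotfile convention
  esac
}

memory_dir claude   # prints: .claude
```

The sync commands below take the directory as input, so supporting a new agent is just another case branch.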
Setting Up S3 Sync with rclone
rclone is the standard tool for syncing files to S3-compatible storage. It supports virtually every S3 provider and handles incremental syncs efficiently, only transferring files that changed.
Step 1: Install rclone
# macOS
brew install rclone
# Linux
curl https://rclone.org/install.sh | sudo bash
# Windows
winget install Rclone.Rclone
Step 2: Configure Your S3 Connection
Create a DanubeData S3 remote. You'll need your access key and secret key from the DanubeData dashboard under Object Storage > Access Keys.
rclone config
# Choose: n (new remote)
# Name: danubedata
# Storage: s3
# Provider: Other
# Access Key: your-access-key
# Secret Key: your-secret-key
# Endpoint: s3.danubedata.ro
# Region: fsn1
Or create the config directly:
# ~/.config/rclone/rclone.conf
[danubedata]
type = s3
provider = Other
access_key_id = your-access-key
secret_access_key = your-secret-key
endpoint = s3.danubedata.ro
region = fsn1
Step 3: Create a Bucket for Agent Memory
# Create a dedicated bucket
rclone mkdir danubedata:ai-agent-memory
# Enable versioning for rollback capability (via DanubeData dashboard)
Step 4: Sync Your Memory
# Push Claude Code project memory to S3
rclone sync .claude/ danubedata:ai-agent-memory/projects/my-project/.claude/ \
  --exclude "*.lock" \
  --exclude "statsig/" \
  -v
# Push global Claude Code memory
rclone sync ~/.claude/ danubedata:ai-agent-memory/global/.claude/ \
  --exclude "*.lock" \
  --exclude "statsig/" \
  --exclude "credentials" \
  --exclude "auth.*" \
  -v
# Pull memory to a new machine
rclone sync danubedata:ai-agent-memory/projects/my-project/.claude/ .claude/ -v
Important: Always exclude credentials, auth tokens, and lock files from sync. These are machine-specific and should never be stored in S3.
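Repeating the same exclude flags in every command invites drift. rclone's --exclude-from flag lets you keep the rules in one file that every push and pull shares. A sketch (written to a temp file here so the snippet is self-contained; ~/.config/rclone/ai-memory-excludes.txt is a suggested permanent home, not a required path):

```shell
# Write the shared exclude list once
EXCLUDES="$(mktemp)"
cat > "$EXCLUDES" <<'EOF'
*.lock
statsig/
credentials
auth.*
EOF

# Every sync then reuses the same rules:
#   rclone sync .claude/ danubedata:ai-agent-memory/projects/my-project/.claude/ \
#     --exclude-from "$EXCLUDES" -v
```

Add a new rule once and every machine, hook, and cron job picks it up on the next sync.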
Automating the Sync
Manual syncs are easy to forget. Here are three ways to automate it.
Option 1: Git Hook (Recommended)
Sync memory every time you commit code:
#!/bin/bash
# .git/hooks/post-commit
BUCKET="danubedata:ai-agent-memory"
PROJECT=$(basename "$(git rev-parse --show-toplevel)")
# Sync project memory after each commit
if [ -d ".claude" ]; then
  rclone sync .claude/ "$BUCKET/projects/$PROJECT/.claude/" \
    --exclude "*.lock" \
    --exclude "statsig/" \
    --quiet
fi
Make the hook executable with chmod +x .git/hooks/post-commit, otherwise Git will silently skip it.
Option 2: Cron Job
Sync every 30 minutes in the background:
# crontab -e
*/30 * * * * rclone sync /path/to/project/.claude/ danubedata:ai-agent-memory/projects/my-project/.claude/ --exclude "*.lock" --exclude "statsig/" --quiet 2>&1 | logger -t ai-memory-sync
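A per-project crontab line stops scaling once you juggle several repositories. A single cron entry can instead call a wrapper that syncs every project it finds. A sketch, assuming your checkouts live under ~/code (the script name and glob are ours; adjust to your layout):

```shell
#!/bin/bash
# sync-all-memory.sh: push memory for every project under ~/code (assumed layout)
BUCKET="danubedata:ai-agent-memory"

for dir in "$HOME"/code/*/.claude; do
  [ -d "$dir" ] || continue
  project=$(basename "$(dirname "$dir")")   # parent directory name = project name
  rclone sync "$dir/" "$BUCKET/projects/$project/.claude/" \
    --exclude "*.lock" --exclude "statsig/" --quiet
done
```

Point the crontab entry at this script instead of a per-project command, and new projects are picked up automatically.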
Option 3: Claude Code Hook
Claude Code supports hooks that run after specific events. A Stop hook fires when Claude Code finishes responding, which makes it a natural place to sync:
// .claude/settings.json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "rclone sync .claude/ danubedata:ai-agent-memory/projects/$(basename \"$(pwd)\")/.claude/ --exclude '*.lock' --exclude 'statsig/' --quiet"
          }
        ]
      }
    ]
  }
}
Team Memory Sharing
One of the most powerful use cases is sharing project-level AI memory across your team. When one developer teaches Claude about a codebase convention, everyone benefits.
Shared Bucket Structure
ai-agent-memory/
├── global/ # Personal (per developer)
│ ├── adrian/.claude/
│ └── maria/.claude/
├── projects/ # Shared project memory
│ ├── api-backend/.claude/
│ ├── web-frontend/.claude/
│ └── mobile-app/.claude/
└── team/ # Team-wide conventions
└── CLAUDE.md
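The layout above boils down to three path prefixes. A small sketch of helpers that encode the convention so scripts don't hard-code paths (the function names are ours, not rclone's):

```shell
BUCKET="danubedata:ai-agent-memory"

# Helpers encoding the bucket layout above (illustrative names)
personal_path() { echo "$BUCKET/global/$1/.claude/"; }    # per-developer memory
project_path()  { echo "$BUCKET/projects/$1/.claude/"; }  # shared project memory
team_path()     { echo "$BUCKET/team/"; }                 # team-wide conventions

project_path api-backend   # prints: danubedata:ai-agent-memory/projects/api-backend/.claude/
```

Centralizing the convention means a future bucket rename touches one place instead of every hook and cron job.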
Pull Team Memory on Clone
Add a setup script to your project:
#!/bin/bash
# scripts/setup-ai-memory.sh
BUCKET="danubedata:ai-agent-memory"
PROJECT=$(basename "$(pwd)")
echo "Pulling AI agent memory for $PROJECT..."
rclone sync "$BUCKET/projects/$PROJECT/.claude/" .claude/ \
  --exclude "*.lock" \
  --exclude "statsig/" \
  -v
echo "Done. Claude Code will use the shared project memory."
Advanced: S3 as a Knowledge Base for RAG
Beyond syncing agent memory files, S3 can serve as the document store for a full Retrieval-Augmented Generation (RAG) pipeline. Store your documentation, design docs, and runbooks in S3, then query them with an LLM.
from langchain_community.document_loaders import S3DirectoryLoader

# Load docs from DanubeData S3 for RAG. The loader accepts the
# connection details directly, so no separate boto3 client is needed.
loader = S3DirectoryLoader(
    "team-knowledge-base",
    endpoint_url="https://s3.danubedata.ro",
    aws_access_key_id="your-key",
    aws_secret_access_key="your-secret",
    region_name="fsn1",
)
documents = loader.load()
# Feed into your embedding pipeline...
We cover this in depth in our companion post: Building a RAG Knowledge Base with S3-Compatible Storage in Europe.
S3 Versioning: Time-Travel for Your AI Memory
Enable S3 versioning on your bucket and you get automatic snapshots of every memory change. This is useful when:
- An agent writes incorrect memory and you need to roll back
- You want to see how your project context evolved over time
- A teammate accidentally overwrites shared memory
# List memory versions (rclone exposes old versions as timestamped file names)
rclone ls danubedata:ai-agent-memory/projects/api/.claude/memory/ --s3-versions
# Restore a specific version by its timestamped name (example timestamp)
rclone copyto --s3-versions \
  "danubedata:ai-agent-memory/projects/api/.claude/memory/MEMORY-v2024-01-15-103000-000.md" \
  .claude/memory/MEMORY.md
Cost Breakdown
AI agent memory is extremely lightweight. Here's what it actually costs:
| Scenario | Storage Used | Monthly Cost |
|---|---|---|
| Single developer, 5 projects | ~500 KB | Included in base plan |
| Team of 10, 20 projects | ~10 MB | Included in base plan |
| RAG knowledge base (docs + embeddings) | 1-50 GB | Included in base plan (up to 1 TB) |
DanubeData's Object Storage starts at €3.99/month with 1 TB of storage and 1 TB of traffic included — more than enough for AI memory and knowledge bases combined.
Security Considerations
When syncing AI agent memory to S3, keep these security practices in mind:
- Never sync credentials: Exclude auth.*, credentials, API keys, and tokens
- Use bucket policies: Restrict access to authorized team members only
- Enable versioning: Protects against accidental deletion or overwrite
- Encrypt at rest: DanubeData S3 supports server-side encryption
- Audit access: Review bucket access logs periodically
- GDPR compliance: DanubeData stores data in Germany (Falkenstein), fully GDPR compliant — important if your memory contains customer-related context
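The first rule, never sync credentials, is worth enforcing in code rather than by habit. A sketch of a pre-sync guard (the function name and file patterns are illustrative; extend the list to match what your environment actually stores):

```shell
# Refuse to push if credential-like files are present in the memory dir
check_no_secrets() {
  leaks=$(find "$1" \( -name 'auth.*' -o -name 'credentials' -o -name '*.pem' \) 2>/dev/null)
  if [ -n "$leaks" ]; then
    echo "Refusing to sync, credential-like files found:" >&2
    echo "$leaks" >&2
    return 1
  fi
}

# Use before every push:
#   check_no_secrets .claude && rclone sync .claude/ danubedata:ai-agent-memory/... -v
```

Dropping this into the git hook or cron wrapper turns the "never sync credentials" rule into a hard stop instead of a convention.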
Quick Reference: Essential Commands
# Initial setup
rclone config # Configure DanubeData S3 remote
rclone mkdir danubedata:ai-agent-memory
# Push memory to S3
rclone sync .claude/ danubedata:ai-agent-memory/projects/$(basename $(pwd))/.claude/ \
  --exclude "*.lock" --exclude "statsig/" --exclude "credentials" -v
# Pull memory from S3
rclone sync danubedata:ai-agent-memory/projects/$(basename $(pwd))/.claude/ .claude/ -v
# Check what would change (dry run)
rclone sync .claude/ danubedata:ai-agent-memory/projects/$(basename $(pwd))/.claude/ \
  --exclude "*.lock" --exclude "statsig/" --dry-run
# List all stored project memories
rclone lsd danubedata:ai-agent-memory/projects/
Conclusion
Your AI coding agent gets smarter the more you use it — but only if that memory persists. By syncing to S3-compatible storage, you get:
- Persistence: Memory survives machine changes, reformats, and disasters
- Portability: Pull your memory on any machine in seconds
- Collaboration: Share project context across your entire team
- History: Version control for your AI's knowledge with S3 versioning
- Peace of mind: 11-nines durability means your memory is safer in S3 than on your laptop
The best part? It takes 5 minutes to set up and costs effectively nothing. Your AI agent memory is measured in kilobytes — the storage is practically free.
Get started: Create a DanubeData account, grab an Object Storage bucket, and never lose your AI coding context again.
Next up: Building a RAG Knowledge Base with S3-Compatible Storage in Europe