This guide will help you set up the database for server-side chat storage.
- Go to https://supabase.com
- Sign up or log in
- Click "New Project"
- Choose organization and fill in:
  - Project name: `mermaid-live-editor`
  - Database password: (generate a strong password)
  - Region: Choose closest to your users
  - Pricing plan: Free tier is sufficient for development
- In your Supabase project, go to SQL Editor
- Click "New Query"
- Copy the contents of `database/schema.sql`
- Paste and click "Run"
- Verify tables were created in Table Editor
- Go to Project Settings → API
- Copy these values:
  - Project URL: `https://xxxxx.supabase.co`
  - anon public key: `eyJhbGc...`
  - service_role key: `eyJhbGc...` (keep secret!)
Add to your .env file:
```env
# Supabase Configuration
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=your-anon-key-here
SUPABASE_SERVICE_ROLE_KEY=your-service-role-key-here

# Chat Configuration
CHAT_MAX_CONVERSATIONS_PER_USER=10
CHAT_MAX_MESSAGES_PER_CONVERSATION=100
CHAT_CLEANUP_DAYS=30
```

Install the Supabase client:

```bash
pnpm add @supabase/supabase-js
```

Create a test file `test-db.ts`:
```ts
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

async function test() {
  const { data, error } = await supabase.from('conversations').select('*').limit(1);
  if (error) {
    console.error('Error:', error);
  } else {
    console.log('Success! Found conversations:', data);
  }
}

test();
```

Run it with: `tsx test-db.ts`
Alternatively, for local development you can run PostgreSQL in Docker:

```bash
# Start PostgreSQL
docker run --name mermaid-postgres \
  -e POSTGRES_PASSWORD=yourpassword \
  -e POSTGRES_DB=mermaid_chat \
  -p 5432:5432 \
  -d postgres:15

# Run schema
docker exec -i mermaid-postgres psql -U postgres -d mermaid_chat < database/schema.sql
```

Add the connection string to your `.env`:

```env
DATABASE_URL=postgresql://postgres:yourpassword@localhost:5432/mermaid_chat
```

Install a driver:

```bash
pnpm add pg
# or
pnpm add @vercel/postgres
```
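With the `pg` driver installed, a quick connectivity check might look like this (a sketch; assumes `DATABASE_URL` is set as above and the schema has been applied):

```typescript
// Minimal connectivity check with node-postgres.
// Assumes DATABASE_URL points at the local container started above.
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function check() {
  const { rows } = await pool.query('SELECT count(*)::int AS n FROM conversations');
  console.log('conversations:', rows[0].n);
  await pool.end();
}

check().catch((err) => {
  console.error('Connection failed:', err.message);
  process.exit(1);
});
```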
The schema creates four tables:

- `conversations`
  - Stores chat conversation metadata
  - Fields: `id`, `user_id`, `title`, `created_at`, `updated_at`, `metadata`
- `messages`
  - Stores individual messages
  - Fields: `id`, `conversation_id`, `role`, `content`, `parts`, `created_at`, `metadata`
- `snapshots`
  - Stores diagram state snapshots
  - Fields: `id`, `conversation_id`, `message_id`, `description`, `state`, `created_at`
- `usage_stats`
  - Tracks token usage and costs
  - Fields: `id`, `conversation_id`, `message_id`, `model`, `tokens`, `cost`, `created_at`
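The table fields above can be mirrored as TypeScript types for the API layer. A sketch, with field types assumed from the names rather than taken from `schema.sql` (check them against the actual schema):

```typescript
// Hypothetical row types for two of the tables; adjust to match schema.sql.
interface Conversation {
  id: string;                     // uuid
  user_id: string;
  title: string;
  created_at: string;             // ISO timestamp
  updated_at: string;
  metadata: Record<string, unknown>;
}

interface Message {
  id: string;
  conversation_id: string;
  role: 'user' | 'assistant' | 'system';
  content: string;
  parts: unknown[];               // structured message parts
  created_at: string;
  metadata: Record<string, unknown>;
}

const example: Message = {
  id: '00000000-0000-0000-0000-000000000001',
  conversation_id: '00000000-0000-0000-0000-000000000002',
  role: 'user',
  content: 'Draw a flowchart',
  parts: [],
  created_at: new Date(0).toISOString(),
  metadata: {}
};

console.log(example.role); // prints "user"
```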
The schema also provides:

- Automatic timestamps: `created_at` and `updated_at` managed by triggers
- Cascade deletes: Deleting a conversation removes all related data
- Indexes: Optimized for common queries
- Constraints: Data validation at the database level
- Functions: Built-in cleanup and statistics functions
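Cascade deletes mean a single call removes everything tied to a conversation. A sketch using the Supabase client (assumes the same environment variables as the test script):

```typescript
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

// Deleting the conversation row also removes its messages, snapshots,
// and usage_stats rows via ON DELETE CASCADE -- no follow-up deletes needed.
async function deleteConversation(conversationId: string) {
  const { error } = await supabase
    .from('conversations')
    .delete()
    .eq('id', conversationId);
  if (error) throw new Error(`Delete failed: ${error.message}`);
}
```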
The built-in functions and some useful size queries can be run from the SQL Editor:

```sql
-- Delete conversations older than 30 days
SELECT * FROM cleanup_old_conversations(30);

-- Get stats for a conversation
SELECT * FROM get_conversation_stats('conversation-uuid-here');

-- View database size
SELECT pg_size_pretty(pg_database_size(current_database()));

-- View table sizes
SELECT
  tablename,
  pg_size_pretty(pg_total_relation_size('public.'||tablename)) AS size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size('public.'||tablename) DESC;
```

To automate cleanup, create a Supabase Edge Function:
```ts
// supabase/functions/cleanup/index.ts
import { createClient } from '@supabase/supabase-js';

Deno.serve(async () => {
  const supabase = createClient(
    Deno.env.get('SUPABASE_URL')!,
    Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!
  );

  const { data, error } = await supabase.rpc('cleanup_old_conversations', { days_to_keep: 30 });

  if (error) {
    return new Response(JSON.stringify({ error: error.message }), {
      status: 500,
      headers: { 'Content-Type': 'application/json' }
    });
  }

  return new Response(JSON.stringify({ success: true, ...data }), {
    headers: { 'Content-Type': 'application/json' }
  });
});
```

Schedule it via cron: `0 0 * * *` (daily at midnight).
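Once deployed, the function can also be triggered manually over HTTP. A sketch assuming the function is named `cleanup` and served at Supabase's default `/functions/v1/` route:

```typescript
// Manually trigger the cleanup Edge Function (Node 18+ for global fetch).
// Assumes SUPABASE_URL and SUPABASE_SERVICE_ROLE_KEY are set as above.
async function triggerCleanup() {
  const res = await fetch(`${process.env.SUPABASE_URL}/functions/v1/cleanup`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${process.env.SUPABASE_SERVICE_ROLE_KEY}` }
  });
  console.log(await res.json());
}

triggerCleanup();
```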
The schema includes Row Level Security (RLS) policies, currently set to allow all operations.
When you add authentication:
- Uncomment the auth-based policies in `schema.sql`
- Update the policies to use `auth.uid()`
- Test with different users
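Once the auth-based policies are in place, client-side queries run as the signed-in user. A sketch assuming email/password auth is enabled and a policy like `user_id = auth.uid()` on `conversations`:

```typescript
import { createClient } from '@supabase/supabase-js';

// Client-side: anon key only; RLS scopes rows to the authenticated user.
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

async function myConversations() {
  // Hypothetical credentials, for illustration only.
  await supabase.auth.signInWithPassword({
    email: 'user@example.com',
    password: 'example-password'
  });
  // Under a user_id = auth.uid() policy, this returns only this user's rows.
  const { data, error } = await supabase.from('conversations').select('*');
  if (error) throw error;
  return data;
}
```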
- Never expose service_role key to client
- Use anon key for client-side operations
- Validate input at API level (already done)
- Rate limit database operations
- Monitor query performance
- Backup regularly (Supabase does this automatically)
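The first two points translate naturally into two separate client factories, one server-only and one shippable to the browser. A sketch (file placement is illustrative):

```typescript
import { createClient } from '@supabase/supabase-js';

// Server only (e.g. under src/lib/server/): the service_role key
// bypasses RLS -- never import this from browser code.
export const adminClient = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

// Safe for the browser: the anon key is subject to RLS policies.
export const publicClient = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);
```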
In the Supabase dashboard:

- Database: View tables, run queries
- API: Monitor API usage
- Logs: View real-time logs
- Performance: Query performance insights
Useful monitoring queries (note: `pg_stat_statements` must be enabled, and on PostgreSQL 13+ the timing columns are named `total_exec_time` and `mean_exec_time`):

```sql
-- Slow queries
SELECT
  query,
  calls,
  total_exec_time,
  mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- Active connections
SELECT count(*) FROM pg_stat_activity;

-- Table bloat
SELECT
  schemaname,
  tablename,
  pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size,
  n_live_tup,
  n_dead_tup
FROM pg_stat_user_tables
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;
```

If requests fail, test the connection from code:

```ts
// Test connection
const { data, error } = await supabase.from('conversations').select('count');
if (error) {
  console.error('Connection error:', error);
  // Check: URL, API keys, network, firewall
}
```
If tables or functions are missing:

```sql
-- Check if tables exist
SELECT tablename FROM pg_tables WHERE schemaname = 'public';

-- Check if functions exist
SELECT proname FROM pg_proc WHERE proname LIKE 'cleanup%';

-- Drop and recreate if needed
DROP TABLE IF EXISTS conversations CASCADE;
-- Then re-run schema.sql
```

For slow queries, rebuild indexes and vacuum:

```sql
-- Rebuild indexes
REINDEX TABLE conversations;
REINDEX TABLE messages;

-- Vacuum tables
VACUUM ANALYZE conversations;
VACUUM ANALYZE messages;

-- Update statistics
ANALYZE;
```

Supabase free tier limits:

- Database: 500MB
- Bandwidth: 2GB/month
- API requests: Unlimited
Storage estimates:

- Average conversation: ~50KB (50 messages × 1KB)
- 500MB = ~10,000 conversations
- With 30-day cleanup: sustainable for moderate usage
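The capacity estimate above can be sanity-checked with a quick calculation (the ~1KB message size and 50-message average are the assumptions stated above):

```typescript
const bytesPerMessage = 1024;                                           // ~1 KB
const messagesPerConversation = 50;
const bytesPerConversation = bytesPerMessage * messagesPerConversation; // ~50 KB

const freeTierBytes = 500 * 1024 * 1024;                                // 500 MB
const capacity = Math.floor(freeTierBytes / bytesPerConversation);

console.log(capacity); // prints 10240, i.e. ~10,000 conversations
```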
Upgrading to a paid plan raises these limits:

- Database: 8GB
- Bandwidth: 50GB/month
- Backups: Point-in-time recovery
- Support: Email support
- ✅ Set up database (you are here)
- ⏭️ Create API endpoints (`/src/routes/api/chat/conversations/`)
- ⏭️ Create database client (`/src/lib/server/db.ts`)
- ⏭️ Update Chat component to use the API
- ⏭️ Add conversation management UI
- ⏭️ Test and deploy