A scalable, production-ready cloud storage solution built with modern web technologies. VaultQ provides secure file storage, sharing capabilities, and Google OAuth authentication with AWS S3 integration.
- Google OAuth Authentication - Secure login using passport-google-oauth20
- File Management - Upload, download, rename, and delete files with ease
- Smart Search - Fast file search with indexed MongoDB queries
- File Sharing - Share files publicly or with specific users with granular permissions (read/write)
- Access Control - Owner-based permissions with public/private file settings
- Presigned URLs - Secure direct uploads and downloads using AWS S3 presigned URLs
- Rate Limiting - Protection against abuse on critical endpoints
- File Size Limits - Multi-layer validation (client, server, S3) for upload size control
- Modern UI - Beautiful, responsive interface built with Next.js and Tailwind CSS
- Real-time Updates - React Query for efficient data fetching and caching
- Starred Files - Mark important files for quick access
- Trash Management - Soft delete with restore capabilities
- Recent Files - Quick access to recently modified files
- File Preview - In-browser preview for images and PDFs
- Storage Limits - Per-user storage quota (default 100MB) with real-time usage tracking and upload prevention when limit is exceeded
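The storage quota above is enforced on the backend before a new upload URL is issued. A minimal sketch of such a check, assuming an Express middleware and a Mongoose file model (the names, paths, and fields are illustrative, not the project's actual code):

```ts
import { Request, Response, NextFunction } from "express";
import { FileModel } from "../models/file"; // hypothetical model path

const DEFAULT_LIMIT_BYTES = 100 * 1024 * 1024; // 100MB default quota

// Hypothetical middleware: reject upload-url requests that would exceed the quota.
export async function enforceStorageLimit(req: Request, res: Response, next: NextFunction) {
  const user = req.user as { _id: string; storageLimit?: number };
  const incomingSize = Number(req.body.size ?? 0);

  // Sum the sizes of all of the user's files, including those in trash.
  const [usage] = await FileModel.aggregate([
    { $match: { owner: user._id } },
    { $group: { _id: null, used: { $sum: "$size" } } },
  ]);

  const used = usage?.used ?? 0;
  const limit = user.storageLimit ?? DEFAULT_LIMIT_BYTES;

  if (used + incomingSize > limit) {
    return res.status(413).json({ error: "Storage limit exceeded" });
  }
  next();
}
```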
Frontend:
- Next.js 16 with React 19
- TypeScript
- Tailwind CSS
- React Query (TanStack Query)
- Lucide React Icons
Backend:
- Node.js with Express
- TypeScript
- MongoDB with Mongoose
- Passport.js for authentication
- AWS S3 for file storage
- Express Session with MongoDB store
- Node.js (v18 or higher)
- MongoDB (local or Atlas)
- AWS Account with S3 bucket
- Google OAuth credentials
```bash
git clone https://github.com/definitelynotchirag/vaultq
cd vaultq
cd backend
npm install
```

Create a .env file in the backend directory:
```env
# Server
PORT=3000
NODE_ENV=development

# MongoDB
MONGODB_URI=mongodb://localhost:27017/vaultq

# Session
SESSION_SECRET=your-super-secret-session-key-change-this

# Google OAuth
GOOGLE_CLIENT_ID=your-google-client-id
GOOGLE_CLIENT_SECRET=your-google-client-secret

# AWS S3
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your-aws-access-key
AWS_SECRET_ACCESS_KEY=your-aws-secret-key
S3_BUCKET_NAME=your-bucket-name

# CORS
FRONTEND_URL=http://localhost:3001
```

Start the backend server:
```bash
npm run dev
```

The backend will run on http://localhost:3000
```bash
cd frontend
npm install
```

Create a .env file in the frontend directory:
```env
NEXT_PUBLIC_API_URL=http://localhost:3000
```

Start the frontend development server:
```bash
npm run dev
```

The frontend will run on http://localhost:3001
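With both servers running, the frontend talks to the backend through React Query. A minimal sketch of what a file-listing hook could look like (the hook name and response shape are assumptions, not the project's actual code):

```ts
import { useQuery } from "@tanstack/react-query";

// Hypothetical shape of a file record returned by GET /files.
interface VaultFile {
  _id: string;
  originalName: string;
  size: number;
}

const API_URL = process.env.NEXT_PUBLIC_API_URL;

// Hypothetical hook: list files accessible to the logged-in user, with optional search.
export function useFiles(search = "") {
  return useQuery<VaultFile[]>({
    queryKey: ["files", search],
    queryFn: async () => {
      const res = await fetch(`${API_URL}/files?search=${encodeURIComponent(search)}`, {
        credentials: "include", // send the session cookie to the backend
      });
      if (!res.ok) throw new Error("Failed to load files");
      return res.json();
    },
  });
}
```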
- Go to Google Cloud Console
- Create a new project or select an existing one
- Enable Google+ API
- Create OAuth 2.0 credentials
- Add the authorized redirect URI: http://localhost:3000/auth/google/callback
- Copy the Client ID and Client Secret into your backend .env file
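On the backend, these credentials are consumed by passport-google-oauth20. A rough sketch of the strategy wiring, assuming a Mongoose User model with a googleId field (the model path and find-or-create logic are assumptions):

```ts
import passport from "passport";
import { Strategy as GoogleStrategy } from "passport-google-oauth20";
import { UserModel } from "../models/user"; // hypothetical model path

passport.use(
  new GoogleStrategy(
    {
      clientID: process.env.GOOGLE_CLIENT_ID!,
      clientSecret: process.env.GOOGLE_CLIENT_SECRET!,
      callbackURL: "/auth/google/callback",
    },
    async (_accessToken, _refreshToken, profile, done) => {
      try {
        // Find an existing user by Google ID, or create one on first login.
        let user = await UserModel.findOne({ googleId: profile.id });
        if (!user) {
          user = await UserModel.create({
            googleId: profile.id,
            email: profile.emails?.[0]?.value,
            name: profile.displayName,
          });
        }
        done(null, user);
      } catch (err) {
        done(err as Error);
      }
    }
  )
);
```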
- Create an S3 bucket in your AWS account
- Create an IAM user with S3 access
- Generate access keys for the IAM user
- Configure CORS for your S3 bucket (run the setup script):

```bash
cd backend
npm run setup:s3-cors
```

To run VaultQ with Docker you will need:
- Docker
- Docker Compose
```bash
git clone https://github.com/definitelynotchirag/vaultq
cd vaultq
```

Create a .env file in the backend directory with all the required environment variables (see the local setup section for details).
```bash
docker-compose up --build
```

This will:
- Build the backend Docker image
- Start the backend service on port 3000
- Set up networking between services
The backend will be available at http://localhost:3000
If you want to run the frontend in Docker as well, you can build it separately:
```bash
cd frontend
docker build -t vaultq-frontend .
docker run -p 3001:3001 vaultq-frontend
```

To stop the services:

```bash
docker-compose down
```

To remove volumes as well:
```bash
docker-compose down -v
```

Architecture overview:

```
Client
  |
  |  Google OAuth login, file CRUD requests, search, sharing, presigned URLs
  v
Backend (Node Express)
  - Google OAuth (passport-google-oauth20)
  - Session authentication protecting all file routes
  - File operations: upload URL, confirm upload, list, search, rename, delete
  - Sharing logic: public flag or sharedWith user list
  - File size limits: client check + multer + S3 content-length rules
  - Rate limiting for key endpoints
  |
  v
MongoDB
  - users collection: googleId, email, name, storageLimit (default 100MB)
  - files collection: owner, originalName, storageName, size, timestamps, public flag, permissions array
  - text index on originalName for fast searching
  - storage calculation includes all files (active + trash) until permanent deletion
  |
  v
S3 Bucket
  - actual file storage
  - direct uploads using presigned URLs from the backend
  - presigned short-lived download URLs for access-controlled reads
```
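The presigned-URL flow in the diagram can be implemented with the AWS SDK v3. A minimal sketch, assuming the bucket comes from S3_BUCKET_NAME and that the key and expiry values shown here are illustrative rather than the project's actual choices:

```ts
import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: process.env.AWS_REGION });
const Bucket = process.env.S3_BUCKET_NAME!;

// Short-lived URL the client uses to PUT the file bytes directly to S3.
export async function createUploadUrl(storageName: string, size: number) {
  const command = new PutObjectCommand({
    Bucket,
    Key: storageName,
    ContentLength: size, // size recorded in the signed request
  });
  return getSignedUrl(s3, command, { expiresIn: 300 }); // 5 minutes
}

// Short-lived URL for access-controlled downloads; issued only after permission checks.
export async function createDownloadUrl(storageName: string) {
  const command = new GetObjectCommand({ Bucket, Key: storageName });
  return getSignedUrl(s3, command, { expiresIn: 60 }); // 1 minute
}
```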
Authentication endpoints:
- GET /auth/google - Initiate Google OAuth flow
- GET /auth/google/callback - OAuth callback
- GET /auth/me - Get current user info
- POST /auth/logout - Logout user
- GET /auth/success - Redirect to success page
- GET /auth/failure - Redirect to failure page
File endpoints:
- POST /files/upload-url - Get presigned upload URL from S3
- POST /files/confirm-upload - Confirm upload and save metadata
- GET /files - List accessible files (supports ?search= query)
- PUT /files/:id - Rename file
- DELETE /files/:id - Move file to trash (soft delete)
- GET /files/:id/download - Get presigned download URL
- GET /files/:id/view - Get presigned view URL for in-browser preview
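Taken together, the upload endpoints form a three-step flow on the client. A hedged sketch of that flow (response field names like uploadUrl and fileId are assumptions, not the actual API contract):

```ts
// Hypothetical client-side upload flow: presigned URL -> direct PUT to S3 -> confirm.
async function uploadFile(file: File) {
  const API_URL = process.env.NEXT_PUBLIC_API_URL;

  // 1. Ask the backend for a presigned upload URL.
  const urlRes = await fetch(`${API_URL}/files/upload-url`, {
    method: "POST",
    credentials: "include",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: file.name, size: file.size, type: file.type }),
  });
  const { uploadUrl, fileId } = await urlRes.json();

  // 2. Upload the bytes straight to S3; they never pass through the backend.
  await fetch(uploadUrl, { method: "PUT", body: file });

  // 3. Tell the backend the upload succeeded so it can persist the metadata.
  await fetch(`${API_URL}/files/confirm-upload`, {
    method: "POST",
    credentials: "include",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ fileId }),
  });
}
```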
Sharing endpoints:
- POST /files/:id/share - Share file with specific user by userId
- POST /files/:id/share-email - Share file with user by email
- POST /files/:id/public - Make file public
- POST /files/:id/private - Make file private
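Behind these endpoints, every read has to pass an ownership/visibility check. A rough sketch of how such a check could be expressed with Mongoose (field names follow the architecture notes above; the helper and its exact schema fields are hypothetical):

```ts
import { Types } from "mongoose";
import { FileModel } from "../models/file"; // hypothetical model path

// Hypothetical helper: return the file only if the requester may read it.
export async function findReadableFile(fileId: string, userId?: string) {
  return FileModel.findOne({
    _id: new Types.ObjectId(fileId),
    $or: [
      { isPublic: true }, // public flag
      ...(userId
        ? [{ owner: userId }, { "sharedWith.user": userId }] // owner or explicitly shared
        : []),
    ],
  });
}
```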
Starred file endpoints:
- POST /files/:id/star - Star a file
- DELETE /files/:id/star - Unstar a file
- GET /files/starred - List starred files (supports ?search= query)
Trash endpoints:
- GET /files/trash - List files in trash
- POST /files/:id/restore - Restore file from trash
- DELETE /files/:id/permanent - Permanently delete file from S3 and DB
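Trash is a soft delete: moving a file to trash only flags its metadata, while permanent deletion also removes the object from S3. A minimal sketch under those assumptions (the deletedAt field and model path are illustrative):

```ts
import { S3Client, DeleteObjectCommand } from "@aws-sdk/client-s3";
import { FileModel } from "../models/file"; // hypothetical model path

const s3 = new S3Client({ region: process.env.AWS_REGION });

// Soft delete: keep the S3 object and the metadata, just mark the file as trashed.
export async function moveToTrash(fileId: string, ownerId: string) {
  await FileModel.updateOne(
    { _id: fileId, owner: ownerId },
    { $set: { deletedAt: new Date() } }
  );
}

// Permanent delete: remove the object from S3, then drop the metadata document.
export async function deletePermanently(fileId: string, ownerId: string) {
  const file = await FileModel.findOne({ _id: fileId, owner: ownerId });
  if (!file) return;

  await s3.send(new DeleteObjectCommand({
    Bucket: process.env.S3_BUCKET_NAME!,
    Key: file.storageName,
  }));
  await FileModel.deleteOne({ _id: file._id });
}
```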
Storage endpoint:
- GET /files/storage - Get user storage usage, limit, and available space
Public/shared access endpoints:
- GET /shared/:fileId - Get public or shared file details
- GET /shared/:fileId/view - Get view URL for public/shared file
- GET /shared/:fileId/download - Get download URL for public/shared file
- Presigned URL Pattern - Demonstrates understanding of scalable file handling without routing files through the backend
- Proper Access Control - Scoped permissions instead of naive public links
- Indexed Search - Prevents full collection scans for better performance
- Rate Limiting - Shows security awareness and production readiness
- Multi-layer Validation - File size limits at client, server, and S3 levels
- Storage Quota Management - Per-user storage limits with automatic enforcement and real-time usage tracking
- Modern Stack - Latest versions of Next.js, React, and TypeScript
- Production Ready - Dockerized, with proper error handling and health checks
MIT
