
Choti Portfolio API – RAG Chatbot (Gemini)

RAG-powered serverless API that answers questions about me and my work. It uses a lightweight in-memory vector store loaded from precomputed embeddings and Google Gemini for both embeddings and generation.



The Story Behind This API

Because "Please Please Please" Don't Give Me Generic Responses


Just like Sabrina sings about wanting something real – this API delivers context-rich answers about my portfolio instead of the same old chatbot nonsense. 🎵

✨ Features

  • 🤖 RAG answers: Retrieves relevant snippets from contact details/profile/project documents
  • 🧠 Gemini models: text-embedding-004 for search, configurable chat model for generation
  • 🚀 Simple API: POST /api/chat-rag with optional conversation history
  • ☁️ Serverless ready: Vercel functions with permissive CORS headers
  • 🔍 Vector similarity search: Cosine similarity-based document retrieval
  • 🛡️ Health monitoring: Built-in health check endpoints

πŸ› οΈ Tech Stack

  • Runtime: Node.js 18+ (ES Modules)
  • Framework: Express (for local dev), Vercel Functions (deployment)
  • AI/ML: @google/generative-ai
  • Vector Store: Custom in-memory implementation with Gemini embeddings
  • Deployment: Vercel
  • Database: File-based embeddings storage

📋 Prerequisites

Before running this project, make sure you have:

  • Node.js 18+ (recommended)
  • A Google Gemini API key
  • Create .env.local in the project root with:
    GEMINI_API_KEY=your_api_key_here
    # optional, defaults to gemini-2.0-flash-lite
    GEMINI_MODEL=gemini-2.0-flash-lite

🚀 Installation

# Clone the repository
git clone https://github.com/jgchoti/jgchoti-api.git
cd jgchoti-api

# Install dependencies
npm install

💻 Running Locally

Start the local Express dev server:

npm run dev
# -> http://localhost:3000

The dev server logs the available endpoints on start.

📚 API Documentation

Base URL

http://localhost:3000/api

Endpoints

GET /api/health

Health check and basic info.

curl -s http://localhost:3000/api/health | jq

Response:

{
  "status": "healthy",
  "message": "Choti's Portfolio API is running!",
  "endpoints": {
    "chat": "/api/chat-rag",
    "health": "/api/health"
  },
  "timestamp": "2025-09-08T17:44:01.845Z"
}

GET /api/

Root API info and usage documentation.

curl -s http://localhost:3000/api/

Response:

{
  "message": "Welcome to Choti's Portfolio API! 🤖",
  "description": "RAG-powered chatbot API for portfolio inquiries",
  "endpoints": {
    "chat": "/api/chat-rag - POST - Chat with Choti's AI agent",
    "health": "/api/health - GET - Health check"
  },
  "usage": {
    "method": "POST",
    "url": "/api/chat-rag",
    "body": {
      "message": "Your question here",
      "conversationHistory": "Optional previous messages"
    }
  },
  "portfolio": "https://jgchoti.github.io",
  "github": "https://github.com/jgchoti"
}

POST /api/chat-rag

Chat with the RAG agent about Choti. Returns model text and debug metadata.

Example Request:

curl -s -X POST https://jgchoti-api.vercel.app/api/chat-rag \
  -H 'Content-Type: application/json' \
  -d '{
    "message": "Summarize Choti'\''s international experience.",
    "conversationHistory": []
  }'

Response Structure:

{
  "response": "Choti has a truly global perspective, having lived and worked in nine countries, including Thailand, Switzerland, and the UK. This international experience gives her a unique edge in cross-cultural collaboration and understanding global markets. You can see how she applies this in her data projects here: https://jgchoti.github.io/data. What kind of role are you looking to fill?",
  "metadata": {
    "model": "gemini-2.0-flash-lite",
    "ragEnabled": true,
    "vectorUsed": true,
    "vectorDebugInfo": {
      "resultsUsed": 3,
      "topSimilarity": 0.6907635643942781,
      "types": ["profile", "contact", "profile"],
      "similarities": [
        0.6907635643942781, 0.6208553455982836, 0.5978628425932092
      ]
    },
    "contextLength": 3803,
    "timestamp": "2025-09-08T17:48:34.895Z"
  }
}

Notes:

  • conversationHistory is optional; items should be { type: 'user' | 'assistant', content: string }
  • If embeddings are missing or the vector store is not ready, the endpoint falls back to a small default context
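A minimal client sketch for POST /api/chat-rag follows, assuming Node.js 18+ with the global fetch API. The buildChatRequest helper is illustrative only, not part of this repository; it just enforces the conversationHistory item shape described above.

```javascript
// Hypothetical helper: builds fetch options for POST /api/chat-rag.
// Each history item should be { type: 'user' | 'assistant', content: string }.
function buildChatRequest(message, conversationHistory = []) {
  const valid = conversationHistory.every(
    (m) => ['user', 'assistant'].includes(m.type) && typeof m.content === 'string'
  );
  if (!valid) throw new Error('Invalid conversationHistory item');
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message, conversationHistory }),
  };
}

// Usage against a running server (uncomment to try):
// const res = await fetch('http://localhost:3000/api/chat-rag',
//   buildChatRequest("Summarize Choti's international experience.", [
//     { type: 'user', content: 'Hi' },
//     { type: 'assistant', content: 'Hello! Ask me about Choti.' },
//   ]));
// console.log((await res.json()).response);
```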

πŸ” Embeddings and RAG

The vector store (lib/HybridVectorStore.js) loads from data/embeddings-gemini.json and uses cosine similarity to retrieve relevant snippets. Query embeddings are computed on-the-fly using Gemini text-embedding-004, so GEMINI_API_KEY is required both to generate embeddings and to serve queries.
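The retrieval step can be sketched as follows. The actual implementation lives in lib/HybridVectorStore.js; the function names here are illustrative, but the math is the standard cosine-similarity ranking the store relies on.

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored documents against a query embedding and keep the top k,
// mirroring the resultsUsed/similarities fields in the response metadata.
function topK(queryEmbedding, docs, k = 3) {
  return docs
    .map((doc) => ({ ...doc, similarity: cosineSimilarity(queryEmbedding, doc.embedding) }))
    .sort((x, y) => y.similarity - x.similarity)
    .slice(0, k);
}
```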

πŸ” Dynamic Portfolio Integration

The system automatically enriches portfolio data through:

  • GitHub API integration (scripts/generate-github.js) that discovers and analyzes repositories, extracting content from README files and repository metadata
  • Career path relevance scoring for data engineering, data science, ML engineering, and backend development roles

Generate/Refresh Embeddings

npm run generate-embeddings

This script builds content from:

  • data/profileData.js
  • data/projectData.js
  • data/contactInfo.js
  • data/github_portfolio_data.json

It then creates embeddings with Gemini and writes:

  • data/embeddings-gemini.json
  • data/embeddings-gemini-metadata.json

🔧 Environment Variables

Create a .env.local file in the project root:

# Required: Google Gemini API key
GEMINI_API_KEY=your_api_key_here

# Optional: Gemini model for generation (defaults to gemini-2.0-flash-lite)
GEMINI_MODEL=gemini-2.0-flash-lite

For Vercel deployment: Set these in the project's Environment Variables dashboard.

πŸ“ Project Structure

jgchoti-api/
├── api/                      # Serverless endpoints for Vercel
│   ├── chat-rag.js           # Main RAG chat endpoint (POST)
│   ├── health.js             # Health check (GET)
│   └── index.js              # Root API info (GET)
├── data/                     # Data and generated embeddings
│   ├── profileData.js        # Profile information
│   ├── projectData.js        # Project data
│   ├── contactInfo.js        # Contact information
│   ├── embeddings-gemini.json          # Generated embeddings
│   └── embeddings-gemini-metadata.json # Embedding metadata
├── lib/
│   └── HybridVectorStore.js  # In-memory vector store using Gemini embeddings
├── scripts/
│   └── generate-embeddings.js # Builds embeddings from profile/projects
├── dev-server.js             # Local Express server that proxies to handlers
├── vercel.json               # Vercel config (headers, rewrites, function settings)
├── package.json              # Scripts and dependencies
├── .env.local                # Environment variables (create this)
└── README.md

🚀 Deployment (Vercel)

Step-by-step deployment:

  1. Login and link project:

    vercel login
    vercel link
  2. Set environment variables:

    vercel env add GEMINI_API_KEY
    vercel env add GEMINI_MODEL  # optional
  3. Deploy:

    npm run deploy
    # or
    vercel --prod

Alternative: Deploy via Vercel Dashboard

  1. Connect your GitHub repository to Vercel
  2. Set environment variables in the dashboard
  3. Deploy automatically on git push

πŸ› οΈ Troubleshooting

Common Issues

401/Authentication Error

  • Ensure GEMINI_API_KEY is set correctly in .env.local (local) and Vercel dashboard (production)
  • Verify your Gemini API key is valid and has proper permissions

Vector Store Not Ready

  • Run npm run generate-embeddings to create data/embeddings-gemini.json
  • Ensure all required data files exist in the data/ directory

Port Already in Use

  • Change the port in dev-server.js or kill the process using port 3000
  • Check for other running Node.js processes

Missing Dependencies

  • Run npm install to ensure all dependencies are installed
  • Check Node.js version (18+ required)

🤖 Demo

Chat with the deployed bot at: https://jgchoti.github.io/

Professional AI that actually understands context and delivers responses that make sense – no generic chatbot nonsense here. ✨
