An intelligent automation system that scrapes job postings, tailors resumes using AI, and automates job application submissions.
AI Job Agent is a FastAPI-based application that leverages LangChain, OpenAI, and Playwright to streamline the job application process. It consists of three core services that work together to automate job hunting:
- Job Scraper: Extracts job details from various platforms (LinkedIn, Indeed, etc.)
- Resume Tailoring: Uses LLM to customize resumes for specific job descriptions
- Application Submitter: Automates form filling and submission (LinkedIn Easy Apply)
- Intelligent Job Scraping: Playwright-based scrapers for dynamic content
- AI-Powered Resume Customization: LangChain + OpenAI GPT-3.5 integration
- Automatic Application Submission: Browser automation with Playwright
- Database Persistence: SQLAlchemy with PostgreSQL/SQLite support
- RESTful API: FastAPI with automatic interactive documentation
- Robust Error Handling: Graceful degradation when services are unavailable
- Comprehensive Testing: Unit and integration tests with pytest
- FastAPI: Modern async web framework
- LangChain: LLM orchestration framework
- OpenAI API: GPT-3.5-turbo for resume tailoring
- Playwright: Browser automation for scraping/submission
- SQLAlchemy: ORM for database operations
- PostgreSQL: Production database (recommended)
- SQLite: Development/testing database
- BeautifulSoup4: HTML parsing
- Pydantic: Data validation
- Uvicorn: ASGI server
- Python 3.10+
- PostgreSQL (optional, SQLite works for development)
- OpenAI API key
- Clone the repository

```bash
git clone <repository-url>
cd ai-job-agent
```

- Create a virtual environment

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

- Install dependencies

```bash
pip install -r requirements.txt
```

- Install Playwright browsers

```bash
playwright install chromium
```

- Configure environment variables

```bash
cp .env.example .env
# Edit .env with your credentials
```

Required environment variables:

```
DATABASE_URL=postgresql://user:password@localhost/dbname
OPENAI_API_KEY=sk-your-openai-api-key
```

- Initialize the database

```bash
# Database tables are created automatically on first run.
# Or create them manually with:
python -c "from app.database import engine, Base; from app.models import *; Base.metadata.create_all(bind=engine)"
```

Development:

```bash
uvicorn app.main:app --reload
```

Production:

```bash
uvicorn app.main:app --host 0.0.0.0 --port 8000
```

Once running, visit:
- Interactive Docs: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
```
POST /jobs/scrape
Content-Type: application/json

{
  "url": "https://www.linkedin.com/jobs/view/123456"
}
```

Response:

```json
{
  "id": 1,
  "title": "Senior Software Engineer",
  "company": "Tech Corp",
  "description": "Job description...",
  "url": "https://...",
  "source": "linkedin",
  "created_at": "2025-11-24T12:00:00"
}
```

```
POST /resumes/tailor
Content-Type: application/json

{
  "base_resume": "Your base resume content...",
  "job_description": "Target job description..."
}
```

Response:

```json
{
  "id": 1,
  "content": "Tailored resume content optimized for the job...",
  "base_resume": false,
  "created_at": "2025-11-24T12:00:00"
}
```

```
POST /applications/submit
Content-Type: application/json

{
  "job_id": 1,
  "resume_id": 1
}
```

Response:

```json
{
  "id": 1,
  "job_id": 1,
  "resume_id": 1,
  "status": "submitted",
  "created_at": "2025-11-24T12:00:00"
}
```

```
ai-job-agent/
├── app/
│   ├── __init__.py
│   ├── main.py              # FastAPI application & endpoints
│   ├── database.py          # Database configuration
│   ├── models.py            # SQLAlchemy models
│   ├── agent.py             # LangChain agent setup
│   └── services/
│       ├── __init__.py
│       ├── scraper.py       # Base scraper & factory
│       ├── resume.py        # LLM resume builder
│       ├── submitter.py     # Application submitter
│       └── scrapers/
│           ├── __init__.py
│           └── linkedin.py  # LinkedIn-specific scraper
├── tests/
│   ├── test_basic.py        # Service unit tests
│   ├── test_scraper.py      # Scraper tests
│   ├── test_resume_llm.py   # LLM integration tests
│   └── test_api.py          # API endpoint tests
├── requirements.txt
├── .env.example
└── README.md
```
Job scraping flow:

```
User provides URL
  ↓
ScraperFactory selects the appropriate scraper
  ↓
Playwright launches browser → navigates to URL
  ↓
BeautifulSoup parses HTML → extracts data
  ↓
Job saved to database → returns Job object
```
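The parse-and-extract step above can be sketched as follows. For a dependency-free illustration this uses the standard library's `html.parser` in place of BeautifulSoup, and the `<h1>`/`class="company"` selectors are assumptions, not the project's actual ones:

```python
# Illustrative parse step: extract a job title and company from raw HTML.
# Uses html.parser instead of BeautifulSoup so the sketch needs no deps.
from html.parser import HTMLParser

class JobParser(HTMLParser):
    """Collects the first <h1> as the title and class="company" text."""
    def __init__(self):
        super().__init__()
        self.title = None
        self.company = None
        self._target = None  # which field the next text node fills

    def handle_starttag(self, tag, attrs):
        if tag == "h1" and self.title is None:
            self._target = "title"
        elif dict(attrs).get("class") == "company" and self.company is None:
            self._target = "company"

    def handle_data(self, data):
        if self._target and data.strip():
            setattr(self, self._target, data.strip())
            self._target = None

def parse_job(html: str, url: str) -> dict:
    parser = JobParser()
    parser.feed(html)
    return {"title": parser.title, "company": parser.company, "url": url}
```

In the real service, the `html` string would come from a Playwright page load rather than a literal.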
Resume tailoring flow:

```
User provides base resume + job description
  ↓
ResumeBuilder initializes ChatOpenAI
  ↓
Prompt template formats input
  ↓
LLM generates tailored resume
  ↓
Resume saved to database → returns Resume object
```
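The prompt-formatting step can be illustrated with a plain-Python sketch. It uses `str.format` in place of LangChain's `PromptTemplate`, and the wording of the template is an assumption, not the project's actual prompt:

```python
# Illustrative tailoring prompt; the real service would pass the formatted
# string to ChatOpenAI (gpt-3.5-turbo) via a LangChain chain.
TAILOR_TEMPLATE = (
    "You are an expert resume writer. Rewrite the base resume so it "
    "emphasises experience relevant to the job description.\n\n"
    "Job description:\n{job_description}\n\n"
    "Base resume:\n{base_resume}\n"
)

def build_tailor_prompt(base_resume: str, job_description: str) -> str:
    return TAILOR_TEMPLATE.format(
        base_resume=base_resume, job_description=job_description
    )
```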
Application submission flow:

```
User provides job_id + resume_id
  ↓
Application record created (status: PENDING)
  ↓
ApplicationSubmitter routes to correct submitter
  ↓
LinkedIn: launch browser → log in → navigate to job → click Easy Apply
  ↓
Fill form fields → submit → update status
```
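The status transitions above can be sketched as a small state machine. The status names mirror the flow; the submitter callable and dict-based record are stand-ins for the real Playwright automation and SQLAlchemy model:

```python
# Hedged sketch of the submission state machine: PENDING → SUBMITTED/FAILED.
from enum import Enum

class ApplicationStatus(str, Enum):
    PENDING = "pending"
    SUBMITTED = "submitted"
    FAILED = "failed"

def submit_application(application: dict, submitter) -> dict:
    """Run the platform-specific submitter and record the outcome."""
    application["status"] = ApplicationStatus.PENDING
    try:
        submitter(application)  # e.g. the LinkedIn Easy Apply automation
        application["status"] = ApplicationStatus.SUBMITTED
    except Exception:
        # Any browser/network failure marks the record FAILED for later retry.
        application["status"] = ApplicationStatus.FAILED
    return application
```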
Run all tests:

```bash
DATABASE_URL=sqlite:///./test.db pytest tests/
```

Run a specific test file:

```bash
DATABASE_URL=sqlite:///./test.db pytest tests/test_api.py -v
```

With coverage:

```bash
DATABASE_URL=sqlite:///./test.db pytest --cov=app tests/
```

PostgreSQL (production):

```
DATABASE_URL=postgresql://user:password@localhost:5432/jobagent
```

SQLite (development):

```
DATABASE_URL=sqlite:///./dev.db
```

The ScraperFactory automatically selects a scraper based on the URL:
- URL contains `linkedin.com` → `LinkedInScraper`
- Anything else → `MockScraper` (default)
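The dispatch rule above can be sketched like this; the class internals are placeholders for the real Playwright-backed scrapers in `app/services/`:

```python
# Illustrative URL-based dispatch, mirroring the ScraperFactory rule:
# linkedin.com → LinkedInScraper, otherwise MockScraper.
class MockScraper:
    source = "mock"

    def scrape(self, url: str) -> dict:
        return {"url": url, "source": self.source}

class LinkedInScraper(MockScraper):
    source = "linkedin"

class ScraperFactory:
    @staticmethod
    def get_scraper(url: str) -> MockScraper:
        if "linkedin.com" in url:
            return LinkedInScraper()
        return MockScraper()
```

Adding a new platform (e.g. Indeed, on the roadmap below) would mean another subclass plus one more branch or a lookup table here.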
Edit `app/services/resume.py` to customize:

```python
self.llm = ChatOpenAI(
    temperature=0.7,        # Creativity (0-1)
    model="gpt-3.5-turbo",  # Model selection
)
```

Available models:

- `gpt-3.5-turbo` (faster, cheaper)
- `gpt-4` (higher quality, slower)
- Never commit the `.env` file or API keys to version control
- Use environment variables for all sensitive data
- Implement rate limiting for production deployments
- Consider using background tasks (Celery/Arq) for long-running operations
- Add authentication/authorization for production API
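For the rate-limiting point above, a minimal per-process token bucket could look like the following. This is a sketch only; a production deployment would use a shared store (e.g. Redis) or middleware rather than in-process state:

```python
# Tiny token-bucket limiter: allows a burst of `capacity` requests and
# refills at `rate` tokens per second.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A request handler would call `bucket.allow()` and return HTTP 429 when it is `False`.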
- LinkedIn Scraper: Only handles public job pages (no authentication)
- Application Submitter: Skeleton implementation (requires manual login setup)
- Error Recovery: Limited retry logic for network failures
- Rate Limiting: No built-in request throttling
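Until fuller retry logic lands, a minimal backoff wrapper (illustrative, not part of the codebase) could guard flaky network calls:

```python
# Retry a callable with exponential backoff; re-raise after the last attempt.
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 0.1):
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```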
- Complete LinkedIn Easy Apply automation with cookie-based auth
- Add Indeed, Glassdoor scrapers
- Implement background task queue (Celery)
- Add job matching/filtering based on criteria
- Email notifications for application status
- Web UI dashboard for monitoring
- Export applications to CSV/PDF
- Multi-tenant support with user authentication
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see LICENSE file for details.
- FastAPI - Web framework
- LangChain - LLM orchestration
- Playwright - Browser automation
- OpenAI - GPT models
For issues and questions:
- Open an issue on GitHub
- Contact: [your-email@example.com]
Note: This tool is for educational purposes. Always respect websites' Terms of Service and robots.txt when scraping. Use responsibly and ethically.