Focusing on LinkedIn, Indeed, and Direct Company Portals for the Indian Tech Market.
An intelligent, autonomous agent that automates the entire job application process—from finding relevant openings based on your resume to assisting with the application itself.
- 📄 Resume-Driven Job Search: Parses your resume to extract key technical skills and uses them to generate dynamic, relevant search queries.
- 🤖 AI-Powered Job Matching: Employs a relevance scoring algorithm to rank job openings based on how well they match the skills in your resume.
- ✨ Gemini-Powered Insights: Uses Google's Gemini to generate custom "talking points" and tailored application summaries for each job, highlighting your strengths.
- ⚙️ Multi-Platform Automation: Built with a modular framework (using Selenium) to handle applications across different platforms like LinkedIn, Workday, and Greenhouse.
- ✅ Interactive UI: A user-friendly interface built with Streamlit that allows you to review, approve, or reject jobs, giving you full control over the application process.
- 🗄️ Persistent Job Tracking: Uses a local SQLite database to store and manage the status of all job applications (`found`, `applying`, `rejected`, `applied`).
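The four-status lifecycle above could be backed by a table along these lines. This is a minimal sketch using an in-memory database; the actual schema in `src/database.py` may differ, and the table and column names here are assumptions:

```python
import sqlite3

# Hypothetical schema sketch -- the real src/database.py may differ.
conn = sqlite3.connect(":memory:")  # the project uses job_applications.db
conn.execute(
    """CREATE TABLE IF NOT EXISTS jobs (
           id INTEGER PRIMARY KEY AUTOINCREMENT,
           title TEXT NOT NULL,
           company TEXT,
           url TEXT UNIQUE,
           relevance_score REAL,
           status TEXT DEFAULT 'found'
               CHECK (status IN ('found', 'applying', 'rejected', 'applied'))
       )"""
)

# A job moves through the lifecycle via status updates:
conn.execute("INSERT INTO jobs (title, company, url) VALUES (?, ?, ?)",
             ("Backend Engineer", "Acme", "https://example.com/job/1"))
conn.execute("UPDATE jobs SET status = 'applying' WHERE url = ?",
             ("https://example.com/job/1",))
status = conn.execute("SELECT status FROM jobs").fetchone()[0]
print(status)  # applying
```

The `CHECK` constraint keeps the status column restricted to the four states the UI knows how to render.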
The agent follows a sophisticated, multi-step process to streamline your job hunt:
- Resume Parsing: You upload your resume, and the agent's parser (`src/parser.py`) extracts your technical skills.
- Automated Scraping: The Selenium-based scraper (`src/scraper.py`) launches a browser, navigates to job boards, and uses your skills to find hundreds of relevant job listings.
- Relevance Scoring & Storing: Each job is scored by `src/matcher.py` based on skill overlap in the title and description. All jobs are then saved to a local SQLite database (`job_applications.db`).
- Interactive Review: The Streamlit UI (`app.py`) displays the found jobs as interactive cards. You have full control to:
  - View Job Criteria: See the full job description.
  - Get AI Insights: Generate a custom summary of how your skills match the job.
  - Reject: Dismiss the job from your queue.
  - Approve & Apply: Move the job to the application stage.
- AI-Assisted Application: When you approve a job, the agent uses Gemini (`src/llm_helper.py`) to craft tailored text for your application.
- Browser Automation: The automation module (`src/automator.py`) takes over, opens the job link in its own browser, and can assist in filling out the application fields on your behalf for your final review and submission.
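The scoring step can be sketched as a simple skill-overlap ratio. This is an illustrative version, not the exact logic in `src/matcher.py` — the naive substring matching here is the simplest possible approach:

```python
def relevance_score(resume_skills, job_title, job_description):
    """Score a job by the fraction of resume skills found in its text
    (naive substring matching over the lowercased title + description)."""
    text = f"{job_title} {job_description}".lower()
    matched = [s for s in resume_skills if s.lower() in text]
    return len(matched) / len(resume_skills) if resume_skills else 0.0

skills = ["Python", "Selenium", "SQL"]
score = relevance_score(skills, "Python Automation Engineer",
                        "Experience with Selenium WebDriver required.")
print(round(score, 2))  # 0.67
```

Two of the three skills appear in the posting, so the job scores 2/3; ranking the queue is then just a sort on this value.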
- Backend & Automation: Python, Selenium WebDriver
- Frontend / UI: Streamlit
- AI & Language Model: Google Gemini
- Data Storage: SQLite, Pandas
- Parsing: BeautifulSoup, PyMuPDF (for PDFs)
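The parsing step splits in two: PyMuPDF extracts raw text from the resume PDF, then a skill vocabulary is matched against that text. A stdlib-only sketch of the matching half (the vocabulary below is an assumed example, not the project's actual list):

```python
import re

# Hypothetical vocabulary; the actual list in src/parser.py may differ.
SKILL_VOCAB = {"python", "selenium", "sql", "streamlit", "pandas", "docker"}

def extract_skills(resume_text: str) -> set:
    """Return vocabulary skills that appear as whole words in the resume text."""
    words = set(re.findall(r"[a-z+#]+", resume_text.lower()))
    return SKILL_VOCAB & words

text = "Built dashboards in Streamlit; automated tests with Selenium and Python."
print(sorted(extract_skills(text)))  # ['python', 'selenium', 'streamlit']
```

Whole-word matching via the tokenizing regex avoids false positives that plain substring search would produce (e.g. matching "r" inside every word).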
Get the agent running on your local machine in a few simple steps.
1. Clone the Repository:

   ```bash
   git clone https://github.com/Praneeth0526/Job_Agent.git
   cd Job_Agent
   ```

2. Create a Virtual Environment (Recommended):

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
   ```

3. Install Dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Set Up Your API Key:
   - Create a file named `.env` in the root directory of the project.
   - Open the `.env` file and add your Google Gemini API key:

     ```
     GEMINI_API_KEY=your_actual_api_key_goes_here
     ```
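At runtime the key has to reach the process environment. Projects commonly use `python-dotenv` for this; a minimal stdlib-only loader shows the idea (file name and override behavior here are simplified for the demo):

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: put KEY=VALUE lines into os.environ.
    Simplified sketch: no quoting rules, and existing values are overridden."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ[key.strip()] = value.strip()

# Demo with a throwaway file instead of the real .env:
with open("demo.env", "w") as f:
    f.write("GEMINI_API_KEY=example-key-123\n")
load_env("demo.env")
print(os.environ["GEMINI_API_KEY"])  # example-key-123
os.remove("demo.env")
```

Once the variable is set, the Gemini client can read it with `os.getenv("GEMINI_API_KEY")` instead of hard-coding the key in source.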
1. Place Your Resume:
   - Add your resume PDF to the `Resume/` directory.
2. Run the Streamlit App:
   ```bash
   streamlit run streamlit_app.py
   ```
   - The application will open in a new browser tab.
3. Start the Agent:
- In the Streamlit UI, confirm the path to your resume.
- Click the "Fetch & Rank Jobs" button to start the scraping process.
- Review and manage the jobs found directly from the UI.
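Behind the review screen, displaying the found jobs amounts to querying the SQLite file — with the Pandas + SQLite pairing from the tech stack, that step can be sketched like this (table and column names are assumptions, and an in-memory database stands in for `job_applications.db`):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")  # stand-in for job_applications.db
conn.execute(
    "CREATE TABLE jobs (title TEXT, company TEXT, relevance_score REAL, status TEXT)"
)
conn.executemany("INSERT INTO jobs VALUES (?, ?, ?, ?)", [
    ("Data Engineer", "Acme", 0.8, "found"),
    ("QA Analyst", "Globex", 0.4, "rejected"),
])

# What a "review queue" query for the UI might look like:
queue = pd.read_sql(
    "SELECT title, company, relevance_score FROM jobs "
    "WHERE status = 'found' ORDER BY relevance_score DESC",
    conn,
)
print(len(queue))  # 1
```

Streamlit renders a DataFrame directly (e.g. via `st.dataframe`), so the query result can be shown to the user with no extra conversion.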
The repository is organized to be modular and scalable:
```
Job_Agent/
├── Resume/
│   └── New_resume.pdf       # Your resume file
├── src/
│   ├── automator.py         # Selenium logic for applying to jobs
│   ├── database.py          # SQLite database management
│   ├── llm_helper.py        # Gemini API integration and prompts
│   ├── matcher.py           # Job-to-resume relevance scoring
│   ├── parser.py            # Resume and job description parsing
│   └── scraper.py           # Web scraping logic
├── .env                     # Stores your API keys (create this yourself)
├── app.py                   # The main Streamlit application file
├── job_applications.db      # Local SQLite database (auto-generated)
└── requirements.txt         # Project dependencies
```
This project has a strong foundation with many exciting possibilities for future development:
- Full End-to-End Application: Complete the final "submit" step in the automation script.
- Advanced Platform Adapters: Build out more robust logic in `automator.py` for specific platforms like Workday and Lever.
- GUI-based Field Mapping: Create a UI where users can visually map their resume details (like "First Name," "Email") to form field IDs on different job sites.
- Chrome Extension: Develop a companion Chrome extension to trigger the agent directly from a job posting page.
- Cloud Deployment: Deploy the Streamlit app to a service like Heroku or Streamlit Community Cloud for public access.
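The GUI-based field-mapping idea above boils down to a data-driven fill loop. Sketched here against a stub driver so it runs without a browser — with real Selenium, a `WebDriver` and `find_element(By.ID, ...)` plus `send_keys` would take the stub's place, and all names below are illustrative:

```python
# User-configurable mapping: resume field -> form element id on a given site.
FIELD_MAP = {
    "first_name": "applicant-first-name",
    "email": "applicant-email",
}

class StubDriver:
    """Minimal stand-in for a Selenium WebDriver, used only for this demo."""
    def __init__(self):
        self.filled = {}
    def fill(self, element_id, value):
        # Real version: driver.find_element(By.ID, element_id).send_keys(value)
        self.filled[element_id] = value

def apply_field_map(driver, resume_data, field_map):
    """Push mapped resume values into their form fields, skipping unmapped keys."""
    for field, element_id in field_map.items():
        if field in resume_data:
            driver.fill(element_id, resume_data[field])

driver = StubDriver()
apply_field_map(driver,
                {"first_name": "Asha", "email": "asha@example.com"},
                FIELD_MAP)
print(driver.filled["applicant-email"])  # asha@example.com
```

Keeping the mapping as plain data is what makes a visual mapping UI feasible: the UI only has to edit the dictionary, never the automation code.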
Contributions, issues, and feature requests are welcome!
Feel free to check the issues page.
This project is licensed under the Apache-2.0 License. See the LICENSE file for details.
