# Getting Started with the Java-Based End-to-End Test Automation Framework
This Java-based test automation framework is designed to support end-to-end test coverage across web, API, and database layers, with seamless CI/CD integration and enterprise-grade extensibility.
- Web UI Testing using Selenium WebDriver
- API Testing with REST-assured and Java SAAJ for SOAP
- Database Validation via JDBC with SQL/NoSQL utility support
- Test Data Management with Excel, JSON, database queries, and runtime data generation
- Dynamic Configuration using `.properties`, `.yaml`, or `.json` with CLI/CI parameters
- Structured Logging & Reporting with Log4j/SLF4J, ExtentReports, email and screenshot support
- CI/CD Ready: Jenkins, GitHub Actions, GitLab, and cloud execution (e.g., Sauce Labs)
- Self-Healing Test Automation:
- AI-powered element locator healing for resilient UI tests
- Automatically recovers from locator changes using semantic context and RAG
- Reduces test flakiness and maintenance effort
- Retrieval-Augmented Generation (RAG):
- Semantic search and answer synthesis over your test docs and codebase
- Supports OpenAI, HuggingFace, and local Ollama embeddings
- Multi-provider, local/cloud, and offline AI support
- Persistent Embedding Cache:
- Embeddings are computed once per document chunk and reused for all future runs
- Massive speedup and cost savings for repeated queries
- Chatbot/NLP & Conversational AI:
- Conversational UI for natural language queries
- Integrates with RAG and knowledge base for context-aware answers
- Supports both local and cloud LLMs
- Lightweight Java HTTP server for AI-powered test automation workflows
- Endpoints for:
- Page object generation from prompt files
- Playwright/Selenium test generation and execution
- Automated code review of generated tests (`/mcp/test-code-review`)
- Reporting: lists test run artifacts and logs (`/mcp/report`)
- Pluggable AI clients: Supports RAG, OpenAI, and local LLMs for prompt completion and code review
- Playwright bridge: Runs Playwright tests via a Node.js subprocess for browser automation
- MongoDB context store: (optional) for storing workflow and conversation context
- Demo/test script: `scripts/test_mcp.sh` exercises all endpoints for quick validation
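To make the MCP server's shape concrete, here is a minimal, self-contained sketch of a lightweight Java HTTP endpoint in the style of `/mcp/report`, built on the JDK's `com.sun.net.httpserver`. This is an illustration only, not the actual `MCPServer` implementation, and the response payload is made up:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class MiniMcpServer {
    // Hypothetical sketch: one /mcp/report endpoint returning a static JSON body.
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/mcp/report", exchange -> {
            byte[] body = "{\"reports\":[]}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body); // write the JSON payload and close the stream
            }
        });
        server.start();
        return server;
    }
}
```

Passing port `0` lets the OS pick a free port, which is convenient for local experiments; the real server in this framework listens on 8090.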
Quick start:

```bash
# Start the MCP server (from project root)
mvn exec:java -Dexec.mainClass="org.k11techlab.framework.ai.mcp.MCPServer"

# Or run directly with Java if built

# In another terminal, run the demo script:
./scripts/test_mcp.sh
```
# AI-Powered Features
**Endpoints:**
- `POST /mcp/generate-page-object` – Generate page objects from prompt
- `POST /mcp/generate-and-run-playwright-test` – Generate & run Playwright test
- `POST /mcp/generate-and-run-selenium-test` – Generate & run Selenium test
- `POST /mcp/test-code-review` – Automated AI code review for test code
- `GET /mcp/report` – List test run reports and logs
**Usage Example:**
Start the MCP server:

```bash
mvn exec:java -Dexec.mainClass="org.k11techlab.framework.ai.mcp.MCPServer"
```

Generate a page object from a prompt file:

```bash
curl -X POST http://localhost:8090/mcp/generate-page-object \
  -H "Content-Type: application/json" \
  -d '{"promptFile":"k11softwaresolutions/pages/pageobject_creation_prompt_multi.txt"}'
```

Review a test code snippet with AI:

```bash
curl -X POST http://localhost:8090/mcp/test-code-review \
  -H "Content-Type: text/plain" \
  --data-binary @YourTestFile.java
```

Fetch a list of test run reports and logs:

```bash
curl http://localhost:8090/mcp/report
```

Built entirely with open-source libraries, this framework is fully extensible and ready to scale for validations involving files, emails, microservices, or third-party system integrations.
Selenium Automation Framework Architecture – © 2025 Kavita Jadhav. All rights reserved.
```mermaid
flowchart TD
    A[Test Suite] --> B(Core Framework)
    B --> C[AI Integration Layer]
    C --> D[RAG Engine]
    C --> E[Self-Healing Engine]
    C --> J[Chatbot/NLP & Conversational AI]
    C --> M[MCP Server]
    M --> C
    M --> N[Playwright/Selenium Bridge]
    M --> O[Prompt Files]
    D --> F[Embedding Providers]
    D --> G[Embedding Cache]
    D --> H[Docs/Knowledge Base]
    E --> I[Locator Healing]
    J --> K[Conversational UI]
```
AI Features Integration: Shows how RAG, embedding cache, and self-healing plug into the core automation framework. Render this diagram with Mermaid for a visual overview.
This framework now includes advanced AI-powered features for smarter, context-aware automation and documentation search:
- Model Control Plane (MCP) Server:
- Lightweight Java HTTP server for AI-powered test generation, review, and workflow orchestration
- Exposes endpoints for page object generation, Playwright/Selenium test execution, automated code review, and reporting
- Integrates with RAG, OpenAI, and local LLMs; bridges to Playwright/Selenium; supports prompt-driven automation
- AI Demo & Documentation:
These enhancements make the framework ready for next-generation, AI-assisted, and self-healing test automation and knowledge retrieval.
The framework is composed of well-structured layers to ensure modularity, maintainability, and scalability across complex enterprise test environments.
- Driver management (Selenium Grid/local/cloud)
- Config loading from external files
- Page Object Model (POM) structure
- Test data provider (Excel/JSON/DB)
- Wait utilities (explicit/implicit/fluent)
- File, JSON, Excel handlers
- REST & SOAP service clients
- DB interaction (JDBC-based)
- Locator and email utilities
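The wait utilities listed above boil down to a polling loop: re-evaluate a condition until it holds or a timeout expires. Here is a plain-Java sketch of that idea, independent of Selenium's `FluentWait`; the class and method names are illustrative:

```java
import java.util.function.Supplier;

public class WaitUtil {
    // Poll a condition until it returns true or the timeout elapses.
    // This mirrors the loop at the heart of explicit/fluent waits.
    public static boolean waitUntil(Supplier<Boolean> condition, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.get()) return true;
            Thread.sleep(pollMs); // back off before re-checking
        }
        return condition.get(); // one final check at the deadline
    }
}
```

In the real framework the condition would be something like "element is visible"; here any `Supplier<Boolean>` stands in for it.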
- Test cases built on Base Test structure
- POM-based interactions
- Data-driven via `@DataProvider`
- Configurable execution (env, role, browser)
- Domain-Specific Language (DSL) support for readability
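The data-driven pattern behind `@DataProvider` is simply a table of input rows fed to one test method. A dependency-free sketch of that flow, where the data rows and the `isValidUser` check are hypothetical stand-ins for the system under test:

```java
public class DataDrivenSketch {
    // The data source: each row is one test case (username, expectedValid),
    // analogous to a TestNG @DataProvider returning Object[][].
    static Object[][] loginData() {
        return new Object[][] {
            {"admin", true},
            {"", false},
            {"guest", true},
        };
    }

    // Stand-in for the behavior under test.
    static boolean isValidUser(String username) {
        return !username.isEmpty();
    }

    // Runs every row against the check, mirroring how TestNG invokes
    // the test method once per provider row; returns the pass count.
    static int runAll() {
        int passed = 0;
        for (Object[] row : loginData()) {
            boolean expected = (Boolean) row[1];
            if (isValidUser((String) row[0]) == expected) passed++;
        }
        return passed;
    }
}
```

With TestNG, the loop disappears: the framework calls the `@Test(dataProvider = "...")` method once per row and reports each invocation separately.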
- Run tests locally, via Docker, VMs, or cloud (Sauce Labs, BrowserStack)
- Supports headless execution
- Retry analyzer and failure recovery
- Data cleanup & environment reset utilities
- Jenkins / GitHub Actions ready
- Parameterized build support
- Maven-based dependency management
- Artifactory/Nexus for internal libs
- Centralized exception handling
- Custom exception types
- Retry mechanism (TestNG-based)
- Safe teardown and recovery logic
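The retry mechanism reduces, at its core, to re-running a failing action a bounded number of times. A plain-Java sketch of that loop follows; TestNG's `IRetryAnalyzer` wires the same idea into test re-execution, and this helper is purely illustrative:

```java
import java.util.concurrent.Callable;

public class RetryUtil {
    // Run the action up to maxAttempts times, rethrowing the last
    // failure only if every attempt fails.
    public static <T> T retry(Callable<T> action, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e; // remember the failure and try again
            }
        }
        throw last;
    }
}
```

A real retry analyzer would typically also log each attempt and cap retries per test rather than per action.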
- ExtentReports/Allure HTML reports
- Log4j/SLF4J structured logs
- Screenshot capture on failure
- Email notifications with test summaries
- Supports Web, Mobile, SOAP, REST API testing
- Dynamic configuration & data handling
- Cloud-ready & DevOps integrated
- Extensible for:
- File-based validations (local/FTP)
- Email workflows
- Microservices architecture
- Localization, accessibility, performance testing
- Java 8+
- Selenium WebDriver
- REST-assured
- SAAJ API
- TestNG
- Apache POI / Jackson / Gson
- Log4j / SLF4J
- ExtentReports / Allure
- JDBC
- Maven
- Java 11+
- Maven 3.6+
- Git
- Chrome or Firefox browser
- IDE (e.g., IntelliJ, Eclipse)
```bash
git clone https://github.com/K11-Software-Solutions/k11-techlab-selenium-java-automation-framework.git
cd k11-techlab-selenium-java-automation-framework
```
Edit config files in `src/test/resources/config/`. Customize:

- `baseUrl`
- `browser`
- Timeouts, credentials, etc.

Ensure browser drivers (e.g., ChromeDriver) are on the system path or configured in the test base.

Use TestNG XML files for specific suites: `src/test/resources/testng/`
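Configuration values such as `baseUrl` and `browser` can be read with `java.util.Properties`; a minimal sketch follows, where the keys shown are illustrative and the actual files live under `src/test/resources/config/`:

```java
import java.io.StringReader;
import java.util.Properties;

public class ConfigLoader {
    // Parse key=value configuration text into a Properties object.
    // In the framework this content would come from a .properties file
    // loaded via a FileReader instead of a StringReader.
    public static Properties load(String content) throws Exception {
        Properties props = new Properties();
        props.load(new StringReader(content));
        return props;
    }
}
```

The same `Properties` object can then back CLI/CI overrides by letting `System.getProperty(key)` take precedence over file values.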
```bash
# Run the full test suite
mvn clean test

# Run a specific TestNG suite
mvn clean test -DsuiteXmlFile=smoke.xml

# Generate and serve the Allure report
mvn allure:report
allure serve target/allure-results
```

- UI regression and smoke testing
- Cross-browser automation
- Framework learning or extension baseline
- CI integration with test reporting
This project is licensed under the MIT License – see the LICENSE file for details.
For consulting, training, or implementation support:
🌐 softwaretestautomation.org
🌐 k11softwaresolutions.com
📧 k11softwaresolutions@outlook.com
