Author

Kavita Jadhav

Accomplished Full Stack Developer and Test Automation Engineer specializing in modern web application development, robust full stack solutions, and scalable automation frameworks. Expert in Playwright, advanced quality engineering, and driving best practices for high-impact, reliable software delivery.

LinkedIn: https://www.linkedin.com/in/kavita-jadhav-tech/


Playwright Python AI-Assisted Test Automation Framework

This repository provides a modern, maintainable automation solution crafted specifically for k11softwaresolutions.com. Built with Playwright and Python, the framework is designed for clarity, scalability, and real-world QA needs. It features a modular Page Object Model (POM), advanced AI-assisted testing, and robust support for data-driven and parallel test execution. The architecture is optimized for subscription and service-based SaaS flows, with a focus on maintainability and extensibility for teams and individuals alike.


Features

Key Capabilities

  • Modular Page Object Model (POM): Maintainable, reusable page abstractions for robust test logic.
  • Flexible Data-Driven Testing: Easily run tests with data from CSV, JSON, Excel, or SQL sources.
  • Parallel & Cross-Browser Execution: Run tests concurrently across Chromium, Firefox, and WebKit for speed and coverage.
  • Headless/Headed Modes: Choose between visible or background browser runs for local or CI environments.
  • Smart Waits & Stability: Leverages Playwright’s auto-waiting for reliable, flake-resistant tests.
  • Automatic Artifacts: Screenshots, video, and trace files are captured for every failure to aid debugging.
  • Retry & Marker Support: Rerun flaky tests and organize suites with custom markers (sanity, regression, etc.).
  • Dynamic Test Data: Generate realistic and edge-case data using the Faker library and custom utilities.
  • Comprehensive Reporting: HTML and Allure reports with screenshots, videos, and trace integration.
  • Centralized & Overrideable Configuration: Manage all settings in one place, with command-line flexibility.
  • Reusable Fixtures: Clean, DRY setup and teardown for browsers, pages, and test data.
  • Extensive Logging: Detailed logs for every test run and artifact.
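
To make the POM bullet concrete, here is a minimal sketch of what a page object in `pages/` might look like. The selectors and method names are illustrative assumptions for this sketch, not the framework's actual locators:

```python
# Illustrative page object in the style of pages/login_page.py.
# The selectors and method names here are assumptions for the sketch,
# not the repository's actual locators.

class LoginPage:
    """Encapsulates locators and actions for a login page."""

    def __init__(self, page):
        # `page` is a Playwright Page (sync API); duck-typed here so the
        # class itself carries no hard Playwright import.
        self.page = page
        self.email_input = "#email"
        self.password_input = "#password"
        self.submit_button = "button[type='submit']"

    def login(self, email: str, password: str) -> None:
        """Fill the form and submit; Playwright auto-waits on each action."""
        self.page.fill(self.email_input, email)
        self.page.fill(self.password_input, password)
        self.page.click(self.submit_button)
```

Tests then call `LoginPage(page).login(...)` instead of touching selectors directly, which is what keeps UI changes localized to one file.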

Advanced & AI-Assisted Features

  • AI-Assisted Testing: an integrated and growing set of AI-powered capabilities, summarized below.
  • Self-Healing Locators: Automatically adapt to UI changes using AI/ML or heuristics.
  • Test Impact Analysis: AI suggests or selects only the relevant tests to run after code changes.
  • Automated Test Generation: Generate new test cases from requirements or logs using LLMs.
  • Intelligent Failure Analysis: AI reviews failed test logs and screenshots to suggest root causes.
  • Natural Language Test Authoring: Write tests in plain English and convert to Playwright code.
  • Visual Regression with AI: Smarter image comparison that ignores minor, irrelevant UI changes.

See the AI-Assisted Testing Features section below for full details and roadmap.
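
As a rough illustration of the heuristic behind self-healing locators, the utility can try a primary selector and fall back to known alternates when the primary matches nothing. The function name and strategy below are assumptions for the sketch, not the actual `ai/self_healing.py` implementation:

```python
# Heuristic fallback-locator sketch. `page` is expected to expose
# Playwright's locator() API; `selectors` is ordered, primary first.
# This is an illustration of the idea, not the repository's actual code.

def find_with_fallbacks(page, selectors):
    """Return the first locator that matches at least one element."""
    for selector in selectors:
        locator = page.locator(selector)
        if locator.count() > 0:
            return locator
    raise LookupError(f"No selector matched: {selectors}")
```

An ML- or LLM-backed version would generate or rank the fallback candidates instead of reading them from a static list.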


Framework Architecture

┌───────────────────────────────────────────────────────────────────┐
│                            TEST LAYER                             │
│  (tests/) - Test logic, assertions, and AI-driven scenarios       │
└───────────────────────┬───────────────────────────────────────────┘
                        │
┌───────────────────────▼───────────────────────────────────────────┐
│                    PAGE OBJECT LAYER                              │
│  (pages/) - UI locators, page actions, and adaptive selectors     │
└───────────────────────┬───────────────────────────────────────────┘
                        │
┌───────────────────────▼───────────────────────────────────────────┐
│                UTILITIES & AI/ML LAYER                            │
│  (utilities/) - Data utilities, randomization, and AI/ML helpers  │
└───────────────────────┬───────────────────────────────────────────┘
                        │
┌───────────────────────▼───────────────────────────────────────────┐
│                CONFIGURATION & INTEGRATION LAYER                  │
│  (config.py, conftest.py, pytest.ini, AI/LLM config)              │
└───────────────────────────────────────────────────────────────────┘

Technologies & Tools

Technology / Library  Role in the Framework
Python 3.8+           Core language for all test logic and framework code
Playwright            Fast, reliable browser automation across all major browsers
Pytest                Flexible, powerful test runner and assertion engine
pytest-xdist          Enables parallel test execution for speed and scalability
pytest-html           Generates detailed HTML reports for test runs
Allure                Advanced analytics and reporting with rich attachments
pytest-rerunfailures  Automatic retries for flaky or unstable tests
Faker                 Produces dynamic, realistic, and edge-case test data
openpyxl              Reads and writes Excel files for data-driven testing
AI/LLM Integrations   Empowers test data, selectors, and analysis with AI/LLMs
Custom ML/Heuristics  Drives self-healing locators, test impact, and visual checks

Prerequisites

Before you begin, ensure you have the following installed:

  • Python 3.8 or higher - Download Python
  • pip - Python package installer (comes with Python)
  • Git - Download Git
  • IDE - VS Code, PyCharm, or any Python IDE

Installation & Setup

1. Clone the Repository

git clone https://github.com/K11-Software-Solutions/k11techlab-playwright-python-ai-assisted-framework.git
cd k11techlab-playwright-python-ai-assisted-framework

2. Create Virtual Environment (Recommended)

Windows:

python -m venv venv
venv\Scripts\activate

macOS/Linux:

python3 -m venv venv
source venv/bin/activate

3. Install Dependencies

pip install -r requirements.txt
pip install -r requirements-ai.txt

4. Install Playwright Browsers

playwright install

Or install specific browsers:

playwright install chromium
playwright install firefox
playwright install webkit

5. Verify Installation

pytest --version
playwright --version

Project Structure

k11techlab-playwright-python-ai-assisted-framework/
│
├── config.py                      # Test configuration and credentials
├── conftest.py                    # Pytest fixtures and hooks
├── pytest.ini                     # Pytest configuration
├── requirements.txt               # Python dependencies
├── README.md                      # Project documentation
│
├── pages/                         # Page Object Model classes
│   ├── __init__.py
│   ├── about_page.py             # About page actions & locators
│   ├── contact_page.py           # Contact page actions & locators
│   ├── dashboard_page.py         # Dashboard page actions & locators
│   ├── forgot_password_page.py   # Forgot password page actions & locators
│   ├── home_page.py              # Home page actions & locators
│   ├── insights_page.py          # Insights page actions & locators
│   ├── login_page.py             # Login page actions & locators
│   ├── logout_page.py            # Logout page actions & locators
│   ├── register_page.py          # Registration page actions & locators
│   ├── reset_password_page.py    # Reset password page actions & locators
│   └── service_page.py           # Service/Subscription page actions & locators
│
├── tests/
│   ├── k11-platform/              # Main test cases (test_login_data_driven.py, test_login_page.py, etc.)
│   ├── playwright-advanced/       # Advanced Playwright scenarios
│   ├── playwright-examples/       # Example Playwright tests
│   └── ai/                        # AI-related test cases
│
├── utilities/                     # Helper utilities
│   ├── __init__.py
│   └── data_reader.py       # Read data from CSV/JSON/Excel/SQL files
│
├── ai/                            # AI/ML-powered utilities and models
│   ├── random_data.py       # Generate random, scenario-based, and AI-powered test data
│   └── self_healing.py      # Self-healing locator utility (AI/ML/LLM powered)
│
├── mcp_prompts/                   # Prompt templates for LLM/AI features
│   └── testdata_generation_prompt.txt  # Example: prompt for AI test data generation
│
├── testdata/                      # Test data files
│   ├── logindata.json            # Login test data (JSON)
│   ├── logindata.csv             # Login test data (CSV)
│   ├── logindata.xlsx            # Login test data (Excel)
│   └── test_users.sql            # SQL test data for data-driven tests
│
└── reports/                       # Test execution reports
    ├── myreport.html             # HTML report
    ├── screenshots/              # Failed test screenshots
    ├── videos/                   # Test execution videos
    ├── traces/                   # Playwright trace files
    ├── allure-results/           # Allure raw results
    └── allure-report/            # Allure HTML report

Running Tests

Basic Test Execution

# Run all tests
pytest

# Run specific test file
pytest tests/k11-platform/test_login_page.py

# Run specific test function
pytest tests/k11-platform/test_login_page.py::test_valid_user_login

# Run tests with verbose output
pytest -v

# Run tests with print statements visible
pytest -s

Browser Selection

# Run with Chromium (default)
pytest --browser=chromium

# Run with Firefox
pytest --browser=firefox

# Run with WebKit (Safari)
pytest --browser=webkit

Headed vs Headless Mode

# Run in headless mode (default)
pytest

# Run in headed mode (visible browser)
pytest --headed

Parallel Execution

# Run tests in parallel using 4 workers
pytest -n 4

# Run tests in parallel using auto-detected CPU count
pytest -n auto

Test Markers

# Run only sanity tests
pytest -m sanity

# Run only regression tests
pytest -m regression

# Run sanity OR regression tests
pytest -m "sanity or regression"

# Run tests excluding certain markers
pytest -m "not sanity"
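
For `-m` filters to work without warnings, the markers must be registered in pytest.ini. A sketch of the relevant section (the exact marker names and descriptions in this repository may differ):

```ini
[pytest]
markers =
    sanity: quick smoke checks run on every commit
    regression: full regression suite
    datadriven: tests parameterized from external data files
```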

Data-Driven Tests

# Run data-driven tests
pytest -m datadriven

# Run specific data-driven test
pytest tests/k11-platform/test_login_data_driven.py
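
The idea behind the CSV path of `utilities/data_reader.py` can be sketched with the standard library: read rows once and hand them to `pytest.mark.parametrize`. The function name and column layout below are illustrative assumptions:

```python
# Sketch of a CSV data reader for data-driven login tests.
# Assumes a CSV with "email" and "password" header columns,
# as in testdata/logindata.csv.

import csv
from pathlib import Path

def read_login_rows(csv_path):
    """Yield (email, password) tuples from a two-column CSV."""
    with Path(csv_path).open(newline="") as handle:
        for row in csv.DictReader(handle):
            yield row["email"], row["password"]

# In a test module the rows would typically feed parametrize:
#   @pytest.mark.parametrize("email,password",
#                            read_login_rows("testdata/logindata.csv"))
#   def test_login(email, password): ...
```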

Rerun Failed Tests

# Rerun failed tests 2 times with 1 second delay
pytest --reruns 2 --reruns-delay 1

Custom Test Run Examples

# Run with specific base URL
pytest --base-url=https://your-app-url.com

# Run with video recording
pytest --video=on

# Run with screenshot on failure
pytest --screenshot=only-on-failure

# Run with tracing enabled
pytest --tracing=on

# Combination example
pytest -v -n 4 --browser=firefox --headed -m sanity

Configuration

pytest.ini Configuration

The pytest.ini file contains default test execution settings:

[pytest]
addopts =
    -v                                          # Verbose output
    --browser=chromium                          # Default browser
    --base-url=https://k11softwaresolutions.com/ # Base URL
    --video=retain-on-failure                   # Video recording
    --screenshot=only-on-failure                # Screenshot capture
    --tracing=retain-on-failure                 # Trace files
    --html=reports/myreport.html                # HTML report
    --alluredir=reports/allure-results          # Allure results

config.py - Test Data

Update config.py with your test credentials and data:

class Config:
    email = "test123@abc.com"
    password = "testpass"
    invalid_email = "testl123@abc.com"
    invalid_password = "test@123xyz"
    service_name = "Consulting"
    # Add more configuration as needed
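
Hard-coded credentials are fine for a demo, but one common refinement (not necessarily what this repository does) is to let environment variables override the defaults so secrets stay out of version control. The variable names below are illustrative assumptions:

```python
# Environment-variable overrides for config.py (sketch).
# K11_TEST_EMAIL / K11_TEST_PASSWORD are hypothetical variable names.

import os

class Config:
    email = os.getenv("K11_TEST_EMAIL", "test123@abc.com")
    password = os.getenv("K11_TEST_PASSWORD", "testpass")
```

In CI, the variables would be injected as masked secrets while local runs fall back to the defaults.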

Command-Line Overrides

Any pytest.ini setting can be overridden via command line:

pytest --browser=firefox --base-url=https://staging.example.com
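
Options such as `--browser` and `--base-url` are provided by the pytest-playwright plugin; project-specific options can be added the same way in conftest.py. The `--env` option below is a hypothetical example, not an option this framework defines:

```python
# Sketch of a custom command-line option in conftest.py.
# --env is a hypothetical addition; parser.addoption mirrors argparse.

def pytest_addoption(parser):
    parser.addoption(
        "--env",
        action="store",
        default="prod",
        help="Target environment name (hypothetical option)",
    )

# A test or fixture would read it back via:
#   request.config.getoption("--env")
```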

Reporting

HTML Report

After test execution, open the HTML report:

# Report location
reports/myreport.html

Features:

  • Test execution summary
  • Pass/fail statistics
  • Test duration
  • Error details and stack traces
  • Embedded screenshots

Allure Report

Generate and view Allure report:

# Generate Allure report from results
allure generate reports/allure-results --clean -o reports/allure-report

# Open Allure report in browser
allure open reports/allure-report

# Or serve the report
allure serve reports/allure-results

Allure Report Features:

  • Trends and statistics
  • Test case duration graphs
  • Screenshots and attachments
  • Video recordings
  • Detailed test steps
  • Test categorization
  • Retry information

Debug Artifacts

When tests fail, the following artifacts are automatically captured:

  • Screenshots: reports/screenshots/
  • Videos: reports/videos/
  • Traces: reports/traces/ (open with playwright show-trace <trace-file>)

Best Practices Implemented

  • Page Object Model (POM): Clear separation, reusable components, easy maintenance.
  • DRY Principle: Shared fixtures, utility classes, centralized config.
  • Naming Conventions: Descriptive test names, clear variables, consistent structure.
  • Error Handling: Robust try-except, meaningful errors, graceful failures.
  • Documentation: Inline comments, docstrings, comprehensive README.
  • Version Control: .gitignore, requirements files, clean commits.
  • Scalability: Modular, extensible, multi-environment support.
  • CI/CD Ready: CLI config, parallel execution, multiple report formats.

Contributing

We welcome contributions from the community! To get started:

  1. Fork this repository to your own GitHub account.
  2. Create a new branch for your feature or fix (git checkout -b feature/YourAmazingFeature).
  3. Make your changes and commit them (git commit -m 'Add: YourAmazingFeature').
  4. Push your branch to your fork (git push origin feature/YourAmazingFeature).
  5. Open a Pull Request describing your changes.

For AI-powered automation enhancements, please use a branch name starting with ai-powered-automation/ (e.g., ai-powered-automation/your-feature).

Contribution Guidelines

  • Adhere to the PEP 8 style guide for Python code.
  • Add or update tests for any new features or bug fixes.
  • Update documentation as needed to reflect your changes.
  • Ensure all tests pass locally before submitting your PR.

Author

Kavita Jadhav

Test Automation Engineer with expertise in scalable test frameworks, Playwright, and quality engineering best practices.

License

This project is licensed under the MIT License - see the LICENSE file for details.



Future Enhancements

  • AI-powered API testing and validation
  • Advanced visual regression with AI/ML
  • Autonomous test generation and maintenance using LLMs
  • Self-healing and adaptive test suites
  • CI/CD pipeline examples (GitHub Actions, Jenkins) with AI-driven test selection
  • Docker containerization and cloud-native execution
  • Performance and reliability testing with AI-based analysis
  • Mobile and cross-platform automation with AI support
  • Cloud execution and scaling (BrowserStack, Sauce Labs, Azure, AWS)
  • Customizable, AI-enhanced reporting and analytics
  • Integration with test management and ALM tools (TestRail, Zephyr, Jira)

AI-Assisted Testing Features

Implemented & Planned

  • AI-Powered Test Data Generation: Use LLMs or advanced Faker patterns to generate realistic, edge-case, or scenario-based test data automatically.
  • Self-Healing Locators: AI/ML or heuristic algorithms auto-update selectors when UI changes are detected, reducing test flakiness.
  • Test Impact Analysis: AI analyzes code changes and suggests or auto-selects only the relevant tests to run, speeding up CI.
  • Automated Test Case Generation: LLMs generate new test cases from requirements, user stories, or production logs.
  • Intelligent Failure Analysis: AI assistant analyzes failed test logs/screenshots and suggests likely root causes or fixes.
  • Natural Language Test Authoring: Write tests in plain English, which are then converted to Playwright code using an LLM.
  • Visual Regression with AI: AI-based image comparison for smarter visual regression, ignoring minor or irrelevant UI changes.

These features are being integrated to make the framework more robust, maintainable, and future-ready for modern QA needs.
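
The spirit of AI-powered test data generation can be illustrated with plain standard-library generators that an LLM- or Faker-backed utility such as `ai/random_data.py` could extend. The function names and scenarios below are assumptions for the sketch:

```python
# Scenario-based test data sketch: a random generator plus hand-picked
# boundary inputs. Illustrative only; the repository's actual utility
# may use Faker or an LLM to produce these.

import random
import string

def random_email(rng=None):
    """Return a random lowercase email at a placeholder domain."""
    rng = rng or random.Random()
    local = "".join(rng.choices(string.ascii_lowercase, k=8))
    return f"{local}@example.com"

def edge_case_emails():
    """Boundary inputs for login-form validation tests."""
    return [
        "",                          # empty input
        "plainaddress",              # missing @
        "a@b",                       # missing top-level domain
        "x" * 64 + "@example.com",   # long local part
    ]
```

An AI-assisted version would generate the edge cases from the form's validation rules instead of a hard-coded list.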


Learning Resources

Recommended for Beginners

  1. Playwright Official Tutorial: playwright.dev/python
  2. Pytest Documentation: docs.pytest.org
  3. Python Testing with Pytest by Brian Okken
  4. Page Object Model Explained: Martin Fowler's Blog

Key Concepts Demonstrated

  • Designing robust, AI-powered test automation frameworks
  • Implementing the Page Object Model (POM)
  • Utilizing pytest fixtures and hooks
  • Creating data-driven and AI-driven test strategies
  • Generating and analyzing test reports (HTML, Allure, AI insights)
  • Applying browser automation and self-healing locator best practices
  • Leveraging LLMs for test generation, data, and analysis
  • Structuring and maintaining clean, scalable, and intelligent code

About K11 Software Solutions

K11 Software Solutions is a leading provider of modern, AI-powered test automation, DevOps, and quality engineering services. We help organizations accelerate digital transformation with robust, scalable, and intelligent automation solutions tailored for SaaS, web, and enterprise platforms.

Partner with us to future-proof your QA and automation strategy!

Follow Me

K11 Tech Lab k11softwaresolutions
