- Overview
- Key Features
- Architecture
- Project Structure
- Installation
- Quick Start
- Running Tests
- Reporting
- Configuration
- API Documentation
- Contributing
- Author
## Overview

This is a professional-grade API testing framework built with Python and Pytest, designed for comprehensive testing of REST APIs. The framework follows industry best practices and implements a Service Object Model (SOM), the API-testing equivalent of the Page Object Model (POM) used in UI testing.
The framework is configured to test JSONPlaceholder, a free fake REST API for testing and prototyping, providing endpoints for:

- Users (`/users`)
- Posts (`/posts`)
- Comments (`/comments`)
## Key Features
- Service Object Model (SOM): Modular service classes for each API endpoint
- Data Models & DTOs: Structured data representation with validation
- Custom Assertions: Specialized assertions for API testing scenarios
- Comprehensive Logging: Multi-level logging with file and console output
- JSON Schema Validation: Automatic response structure validation
- Response Time Monitoring: Performance tracking and assertions
- Data-Driven Testing: External test data management
- Parameterized Tests: Efficient test case multiplication
- Custom Markers: Organized test categorization
- HTML Reports: Beautiful, interactive test reports
- Coverage Reports: Code coverage analysis and visualization
- Allure Integration: Enterprise-grade test reporting
- Performance Metrics: Response time tracking and analysis
- Type Hints: Full type annotation coverage
- Code Formatting: Black code formatter integration
- Linting: Flake8 static analysis
- CI/CD Ready: GitHub Actions compatible
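The custom assertions and response-time monitoring listed above are typically thin helpers over the HTTP response object. A minimal sketch of how such helpers could be written (the actual `utils/assertions.py` may differ; `FakeResponse` is an illustrative stand-in for a real `requests.Response`):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class FakeResponse:
    """Illustrative stand-in for requests.Response."""
    status_code: int
    elapsed_seconds: float
    body: Dict[str, Any] = field(default_factory=dict)

    def json(self) -> Dict[str, Any]:
        return self.body


def assert_status_code(response, expected: int) -> None:
    actual = response.status_code
    assert actual == expected, f"Expected status {expected}, got {actual}"


def assert_response_time(response, max_time: float) -> None:
    elapsed = response.elapsed_seconds
    assert elapsed <= max_time, f"Response took {elapsed:.3f}s, limit is {max_time:.3f}s"


def assert_json_structure(response, required_fields: List[str]) -> None:
    payload = response.json()
    missing = [name for name in required_fields if name not in payload]
    assert not missing, f"Missing required fields: {missing}"


resp = FakeResponse(status_code=200, elapsed_seconds=0.12, body={"id": 1, "name": "Leanne"})
assert_status_code(resp, 200)
assert_response_time(resp, max_time=2.0)
assert_json_structure(resp, required_fields=["id", "name"])
```

Keeping each check in a named helper gives failures a descriptive message instead of a bare `assert`, which is what makes API test reports readable.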
## Architecture

The framework implements a layered architecture following the Service Object Model (SOM) pattern:
```
┌───────────────────┐
│    Test Layer     │  ← Test classes with AAA pattern
├───────────────────┤
│  Assertion Layer  │  ← Custom API-specific assertions
├───────────────────┤
│   Service Layer   │  ← Business logic for each endpoint (SOM)
├───────────────────┤
│    Model Layer    │  ← Data models and DTOs
├───────────────────┤
│   Client Layer    │  ← HTTP client and core functionality
└───────────────────┘
```
Similar to how POM encapsulates UI elements and actions, SOM encapsulates:
- API Endpoints: Each service class represents an API resource
- HTTP Operations: Methods for GET, POST, PUT, PATCH, DELETE
- Business Logic: Complex operations and data transformations
- Validation: Response structure and data validation
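Concretely, a base service plus one resource service might look like the sketch below. The class and method names mirror the files shown elsewhere in this README, but the real implementation may differ; a stub client stands in for the actual HTTP client so the example is self-contained:

```python
from typing import Any, Dict, Optional


class BaseService:
    """Shared HTTP plumbing for all resource services (SOM base class)."""

    def __init__(self, client: Any, base_path: str) -> None:
        self.client = client
        self.base_path = base_path

    def get(self, path: str = "") -> Any:
        return self.client.request("GET", f"{self.base_path}{path}")

    def post(self, payload: Dict[str, Any]) -> Any:
        return self.client.request("POST", self.base_path, json=payload)


class UserService(BaseService):
    """Encapsulates the /users endpoint, like a Page Object for an API."""

    def __init__(self, client: Any) -> None:
        super().__init__(client, "/users")

    def get_all_users(self) -> Any:
        return self.get()

    def get_user_by_id(self, user_id: int) -> Any:
        return self.get(f"/{user_id}")

    def create_user(self, user_data: Dict[str, Any]) -> Any:
        return self.post(user_data)


class StubClient:
    """Records requests instead of sending them, for illustration only."""

    def request(self, method: str, url: str, json: Optional[Dict[str, Any]] = None):
        return {"method": method, "url": url, "json": json}


service = UserService(StubClient())
print(service.get_user_by_id(1))  # {'method': 'GET', 'url': '/users/1', 'json': None}
```

Because the client is injected, tests can swap in a stub or mock without touching the service classes, just as POM tests swap in different browser drivers.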
## Project Structure

```
pytest-python-api-sample/
├── core/                     # Core framework components
│   ├── __init__.py
│   ├── api_client.py         # HTTP client implementation
│   ├── logger.py             # Professional logging system
│   └── models.py             # Data models and DTOs
├── services/                 # Service Object Model (SOM)
│   ├── __init__.py
│   ├── base_service.py       # Base service class
│   ├── user_service.py       # User endpoint service
│   ├── post_service.py       # Post endpoint service
│   └── comment_service.py    # Comment endpoint service
├── utils/                    # Utility modules
│   ├── assertions.py         # Custom API assertions
│   └── config_loader.py      # Configuration management
├── tests/                    # Test implementation
│   ├── conftest.py           # Pytest fixtures and setup
│   ├── test_users.py         # User endpoint tests
│   ├── test_posts.py         # Post endpoint tests
│   └── test_comments.py      # Comment endpoint tests
├── data/                     # Test data management
│   └── test_data.json        # External test data
├── schemas/                  # JSON schema validation
│   ├── user_schema.json      # User response schema
│   ├── post_schema.json      # Post response schema
│   └── comment_schema.json   # Comment response schema
├── config/                   # Configuration files
│   └── config.yaml           # API configuration
├── reports/                  # Test reports and artifacts
├── logs/                     # Log files
├── pytest.ini                # Pytest configuration
├── requirements.txt          # Python dependencies
└── README.md                 # Project documentation
```
## Installation

Prerequisites:

- Python 3.8+
- pip package manager
- Git (for cloning)
```bash
# Clone the repository
git clone https://github.com/yourusername/pytest-python-api-sample.git
cd pytest-python-api-sample

# Create and activate a virtual environment (Windows)
python -m venv .venv
.venv\Scripts\activate

# Create and activate a virtual environment (macOS/Linux)
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Verify the installation
pytest --version
python -c "import requests; print('✅ Installation successful!')"
```

## Quick Start

```bash
# Run all tests
pytest

# Run a specific test file
pytest tests/test_users.py

# Run a specific test function
pytest tests/test_users.py::TestUsersAPI::test_get_single_user

# Run with verbose output
pytest -v
```

Filter by marker:

```bash
# Run only integration tests
pytest -m integration

# Run only GET request tests
pytest -m get

# Run user endpoint tests
pytest -m users

# Run positive test cases
pytest -m positive
```

Generate reports:

```bash
# HTML report
pytest --html=reports/report.html

# Coverage report
pytest --cov=core --cov=services --cov-report=html

# Allure report
pytest --alluredir=reports/allure
allure serve reports/allure
```

## Running Tests

By test type:

```bash
pytest -m unit         # Unit tests
pytest -m integration  # Integration tests
pytest -m smoke        # Smoke tests
pytest -m regression   # Regression tests
pytest -m performance  # Performance tests
```

By HTTP method:

```bash
pytest -m get     # GET request tests
pytest -m post    # POST request tests
pytest -m put     # PUT request tests
pytest -m patch   # PATCH request tests
pytest -m delete  # DELETE request tests
```

By endpoint:

```bash
pytest -m users     # User endpoint tests
pytest -m posts     # Post endpoint tests
pytest -m comments  # Comment endpoint tests
```

By scenario:

```bash
pytest -m positive  # Happy path tests
pytest -m negative  # Error scenario tests
pytest -m boundary  # Boundary value tests
```

Parallel execution:

```bash
# Run tests in parallel (requires pytest-xdist)
pytest -n auto

# Run with a specific number of workers
pytest -n 4
```

Per-run environment variables:

```bash
# Set log level
LOG_LEVEL=DEBUG pytest

# Set API base URL
API_BASE_URL=https://custom-api.com pytest

# Set test environment
TEST_ENVIRONMENT=staging pytest
```

## Reporting

Generate interactive HTML reports:

```bash
pytest --html=reports/pytest_report.html --self-contained-html
```

Track code coverage across the framework:

```bash
pytest --cov=core --cov=services --cov-report=html:reports/coverage
```

Enterprise-grade reporting with Allure:

```bash
# Generate Allure data
pytest --alluredir=reports/allure

# Serve an interactive report
allure serve reports/allure

# Generate a static report
allure generate reports/allure -o reports/allure-report
```

Monitor API performance:

```bash
# Show the slowest tests
pytest --durations=10

# Benchmark tests (requires pytest-benchmark)
pytest --benchmark-only
```

## Configuration

The API settings live in `config/config.yaml`:

```yaml
base_url: "https://jsonplaceholder.typicode.com"
headers:
  Content-Type: "application/json"
  User-Agent: "API-Testing-Framework/1.0"
timeout: 10
retry_attempts: 3
```

The framework includes comprehensive pytest configuration:
- Custom markers for test organization
- HTML and coverage reporting
- Parallel execution support
- Performance monitoring
- Logging configuration
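Custom markers like those used throughout this README must be registered with pytest or they trigger warnings. A plausible (not verbatim) excerpt from `pytest.ini`, using the marker names shown in the commands above:

```ini
[pytest]
markers =
    unit: unit tests
    integration: integration tests
    smoke: smoke tests
    regression: regression tests
    performance: performance tests
    get: GET request tests
    post: POST request tests
    put: PUT request tests
    patch: PATCH request tests
    delete: DELETE request tests
    users: user endpoint tests
    posts: post endpoint tests
    comments: comment endpoint tests
    positive: happy path tests
    negative: error scenario tests
    boundary: boundary value tests
```

With `--strict-markers` added to `addopts`, a typo in a marker name fails the run instead of silently matching nothing.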
```bash
# API Configuration
export API_BASE_URL="https://jsonplaceholder.typicode.com"
export LOG_LEVEL="INFO"
export TEST_ENVIRONMENT="local"

# Test Execution
export PYTEST_WORKERS="4"
export COVERAGE_THRESHOLD="80"
```

## API Documentation

User service:

```python
from services import UserService

user_service = UserService(api_client)

# Get all users
response = user_service.get_all_users()

# Get user by ID
response = user_service.get_user_by_id(1)

# Create user
user_data = {"name": "Test User", "username": "test", "email": "test@example.com"}
response = user_service.create_user(user_data)
```

Post service:

```python
from services import PostService

post_service = PostService(api_client)

# Get all posts
response = post_service.get_all_posts()

# Get posts by user
response = post_service.get_posts_by_user_id(1)

# Create post
post_data = {"title": "Test Post", "body": "Test content", "userId": 1}
response = post_service.create_post(post_data)
```

Comment service:

```python
from services import CommentService

comment_service = CommentService(api_client)

# Get all comments
response = comment_service.get_all_comments()

# Get comments by post
response = comment_service.get_comments_by_post_id(1)
```

Custom assertions:

```python
from utils.assertions import assert_status_code, assert_response_time, assert_json_structure

# Status code assertion
assert_status_code(response, 200)

# Response time assertion
assert_response_time(response, max_time=2.0)

# JSON structure assertion
assert_json_structure(response, required_fields=["id", "name", "email"])
```

## Contributing

- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Make your changes
- Run tests: `pytest`
- Run linting: `flake8`
- Format code: `black .`
- Commit changes: `git commit -m 'Add amazing feature'`
- Push to branch: `git push origin feature/amazing-feature`
- Open a Pull Request
- Type Hints: All functions must include type hints
- Docstrings: Use Google-style docstrings
- Testing: Maintain 80%+ code coverage
- Formatting: Use Black for code formatting
- Linting: Pass Flake8 checks
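As an illustration of these standards, a hypothetical helper written with full type hints and a Google-style docstring might look like:

```python
from typing import Dict, List


def missing_fields(payload: Dict[str, object], required: List[str]) -> List[str]:
    """Return required keys absent from an API response payload.

    Args:
        payload: Parsed JSON body of the response.
        required: Field names that must be present.

    Returns:
        The required field names missing from ``payload``, in order.
    """
    return [name for name in required if name not in payload]


print(missing_fields({"id": 1}, ["id", "name"]))  # ['name']
```

Flake8 and Black both accept this style, and the docstring sections render cleanly in generated API documentation.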
Commit message format:

```
type(scope): description
```

Examples:

```
feat(users): add user validation service
fix(posts): resolve post creation bug
docs(readme): update installation instructions
test(comments): add boundary value tests
```
## Author

**Israel Wasserman**

- LinkedIn
- Senior QA Engineer | Python Automation Developer
- Email: [contact information]

Senior QA Engineer with extensive experience in:

- Test Automation Framework Development
- Python Development and Best Practices
- API Testing and Validation
- CI/CD Pipeline Integration
- Test Strategy and Quality Metrics
## License

This project is licensed under the MIT No Attribution License; see the LICENSE file for details.

Simple summary: you can use this code for any purpose, with no restrictions or attribution requirements.

⭐ Star this repository if it helped you! ⭐

Built with ❤️ for the testing community.