This directory contains Locust test scripts for performance testing the API endpoints of the "Intelligent Advisor for Personal Finance & Investment" system. Each script targets a specific API section: admin, predictions, user, budget, profile, and assets. This README explains how to run the tests and store the results.
- Python 3.10+: Ensure Python is installed.

  ```bash
  python --version
  ```

- Locust: Install Locust using pip.

  ```bash
  pip install locust
  ```

- Environment Setup:
  - Clone the repository or copy the `tests/locust` directory.
  - Update test credentials and parameters in each script (e.g., `test_user`, `test_pass`, user IDs, tickers).
  - Ensure access to the staging API (`https://api-intellifinance.shancloudservice.com`).
- Dependencies: Each script assumes FastAPI, SQLAlchemy, and other dependencies are handled server-side. No additional client-side dependencies are needed beyond Locust.
The Locust scripts are located in the `tests/locust` directory:

```
tests/locust/
├── locust_admin.py        # Tests admin section GET endpoints
├── locust_predictions.py  # Tests predictions section GET endpoints
├── locust_user.py         # Tests user section GET endpoints
├── locust_budget.py       # Tests budget section GET endpoints
├── locust_profile.py      # Tests profile section GET endpoints
└── locust_assets.py       # Tests assets section GET endpoints
```
Each script is self-contained and tests all GET endpoints for its respective API section.
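For orientation, each script follows roughly the shape below. This is a minimal sketch, not the exact staging code: the `wait_time`, the `/admin/endpoint` route, and the `access_token` field in the login response are assumptions (`AdminUser` matches the class name imported by the combined script described later).

```python
from locust import HttpUser, between, task

class AdminUser(HttpUser):
    wait_time = between(1, 3)  # pause 1-3 seconds between tasks (assumed)

    def on_start(self):
        # Log in once per simulated user and keep the bearer token for later requests.
        response = self.client.post("/auth/login", json={
            "username": "test_user",  # replace with valid staging credentials
            "password": "test_pass",
        })
        token = response.json()["access_token"]  # assumed response field name
        self.headers = {"Authorization": f"Bearer {token}"}

    @task
    def get_admin_endpoint(self):
        # Placeholder route; the real script exercises every GET endpoint in its section.
        self.client.get("/admin/endpoint", headers=self.headers)
```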
Before running tests, update the following in each script:
- Credentials: Replace `test_user` and `test_pass` in the `on_start` method with valid staging credentials:

  ```python
  response = self.client.post("/auth/login", json={
      "username": "test_user",  # Replace with valid username
      "password": "test_pass"   # Replace with valid password
  })
  ```
- Parameters: Update user IDs, tickers, or other parameters (e.g., `user_ids`, `tickers`, `screener_types`) with valid values from the staging database (a sketch for fetching them dynamically follows this list).
  - Example for `locust_assets.py`:

    ```python
    tickers = ["AAPL", "GOOGL", "MSFT", "AMZN"]  # Replace with valid tickers
    screener_types = ["growth", "value", "dividend", "tech"]  # Replace with ScreenerType enum values
    ```

  - Query the database or API to get valid values:

    ```bash
    curl https://api-intellifinance.shancloudservice.com/assets/screener-types  # For screener types
    ```
- Verify credentials:

  ```bash
  curl -X POST https://api-intellifinance.shancloudservice.com/auth/login \
    -H "Content-Type: application/json" \
    -d '{"username":"test_user","password":"test_pass"}'
  ```
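To avoid hardcoded values going stale, parameters can be fetched once at module load. A minimal sketch using the `requests` library (installed alongside Locust) and the `/assets/screener-types` endpoint shown above; the assumption that the endpoint returns a plain JSON list is ours:

```python
import requests

HOST = "https://api-intellifinance.shancloudservice.com"

def fetch_screener_types():
    # Assumes the endpoint returns a JSON list of screener type strings.
    response = requests.get(f"{HOST}/assets/screener-types", timeout=10)
    response.raise_for_status()
    return response.json()

# Run once at import time so every simulated user shares the same list.
screener_types = fetch_screener_types()
```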
Tests can be run in two modes:
- Web Mode: Interactive UI for configuring and monitoring tests.
- Headless Mode: Automated runs with CSV output for scripting or CI.
Run Locust with the web interface to configure users, spawn rate, and monitor results in real-time.
- Command Template:

  ```bash
  locust -f <script_name> --host=https://api-intellifinance.shancloudservice.com
  ```

  Replace `<script_name>` with the script file (e.g., `locust_admin.py`).
- Section-Specific Commands:

  ```bash
  # Admin
  locust -f tests/locust/locust_admin.py --host=https://api-intellifinance.shancloudservice.com

  # Predictions
  locust -f tests/locust/locust_predictions.py --host=https://api-intellifinance.shancloudservice.com

  # User
  locust -f tests/locust/locust_user.py --host=https://api-intellifinance.shancloudservice.com

  # Budget
  locust -f tests/locust/locust_budget.py --host=https://api-intellifinance.shancloudservice.com

  # Profile
  locust -f tests/locust/locust_profile.py --host=https://api-intellifinance.shancloudservice.com

  # Assets
  locust -f tests/locust/locust_assets.py --host=https://api-intellifinance.shancloudservice.com
  ```
- Access the UI:
  - Open `http://localhost:8089` in a browser (or the testing machine's IP if remote).
  - Configure:
    - Number of users: 10 (recommended for initial load testing).
    - Spawn rate: 1 user/second.
    - Host: `https://api-intellifinance.shancloudservice.com` (pre-filled from the command).
  - Start the test and monitor response times, error rates, and RPS.
Run Locust without the UI for automated testing, saving results to CSV files.
- Command Template:

  ```bash
  locust -f <script_name> --host=https://api-intellifinance.shancloudservice.com \
    --users=10 --spawn-rate=1 --run-time=10m --headless --csv=<output_prefix>
  ```

  - `<script_name>`: Script file (e.g., `locust_admin.py`).
  - `<output_prefix>`: Prefix for CSV output files (e.g., `admin_test_results`).
  - `--users=10`: Simulates 10 concurrent users.
  - `--spawn-rate=1`: Spawns 1 user per second.
  - `--run-time=10m`: Runs for 10 minutes.
- Section-Specific Commands:

  ```bash
  # Admin
  locust -f tests/locust/locust_admin.py --host=https://api-intellifinance.shancloudservice.com \
    --users=10 --spawn-rate=1 --run-time=10m --headless --csv=admin_test_results

  # Predictions
  locust -f tests/locust/locust_predictions.py --host=https://api-intellifinance.shancloudservice.com \
    --users=10 --spawn-rate=1 --run-time=10m --headless --csv=predictions_test_results

  # User
  locust -f tests/locust/locust_user.py --host=https://api-intellifinance.shancloudservice.com \
    --users=10 --spawn-rate=1 --run-time=10m --headless --csv=user_test_results

  # Budget
  locust -f tests/locust/locust_budget.py --host=https://api-intellifinance.shancloudservice.com \
    --users=10 --spawn-rate=1 --run-time=10m --headless --csv=budget_test_results

  # Profile
  locust -f tests/locust/locust_profile.py --host=https://api-intellifinance.shancloudservice.com \
    --users=10 --spawn-rate=1 --run-time=10m --headless --csv=profile_test_results

  # Assets
  locust -f tests/locust/locust_assets.py --host=https://api-intellifinance.shancloudservice.com \
    --users=10 --spawn-rate=1 --run-time=10m --headless --csv=assets_test_results
  ```
- Notes:
  - Adjust `--users`, `--spawn-rate`, and `--run-time` based on testing needs (e.g., `--users=100` for stress testing, `--run-time=8h` for soak testing).
  - Ensure rate limits (e.g., 100 requests/minute for the general API, 10 requests/minute for `/auth/login`) are respected to avoid HTTP 429/503 errors; see the pacing sketch after this list.
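One way to stay under the 100 requests/minute limit is to pace each simulated user with Locust's `constant_pacing` wait time. A minimal sketch, assuming 10 users sharing the budget (10 requests/minute each, i.e. one task every 6 seconds); `PacedUser` and the `/profile` route are illustrative:

```python
from locust import HttpUser, constant_pacing, task

class PacedUser(HttpUser):
    # 10 users x (1 task / 6 s) = ~100 requests/minute in total,
    # assuming each task issues exactly one request.
    wait_time = constant_pacing(6)

    @task
    def get_profile(self):
        # Placeholder route; auth headers omitted for brevity.
        self.client.get("/profile")
```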
In headless mode, Locust generates CSV files via the `--csv` flag. For each test, the two files you will use most are:

- `<output_prefix>_stats.csv`: Aggregated statistics (e.g., response times, request rates, error rates).
- `<output_prefix>_stats_history.csv`: Time-series data for response times and other metrics.
Example for `locust_admin.py`:

```bash
locust -f tests/locust/locust_admin.py --host=https://api-intellifinance.shancloudservice.com \
  --users=10 --spawn-rate=1 --run-time=10m --headless --csv=admin_test_results
```

Output files:

- `admin_test_results_stats.csv`
- `admin_test_results_stats_history.csv`
- Stats CSV:
  - Columns: `Type`, `Name`, `Request Count`, `Failure Count`, `Median Response Time`, `Average Response Time`, `Min Response Time`, `Max Response Time`, `Average Content Size`, `Requests/s`, `Failures/s`, `50%`, `66%`, `75%`, `80%`, `90%`, `95%`, `99%`, `99.9%`, `99.99%`, `100%`.
  - Example:

    ```
    Type,Name,Request Count,Failure Count,Median Response Time,...
    GET,/admin/endpoint,100,2,200,...
    ```

  - Use to verify KPIs (e.g., p95 response time < 500ms, error rate < 0.1%, per Section 7.2).
- Stats History CSV:
  - Columns: `Timestamp`, `User Count`, `Type`, `Name`, `Request Count`, `Failure Count`, `Median Response Time`, ...
  - Example:

    ```
    Timestamp,User Count,Type,Name,Request Count,...
    1698765432,10,GET,/admin/endpoint,50,...
    ```

  - Use for time-series analysis (e.g., plot response times over time).
- Local Storage:
  - CSV files are saved in the directory where the Locust command is run.
  - Organize results in a subdirectory (e.g., `tests/results`):

    ```bash
    mkdir -p tests/results
    mv *_test_results*.csv tests/results/
    ```
- Archiving:
  - Compress results for long-term storage:

    ```bash
    tar -czf tests/results/archive_$(date +%Y%m%d).tar.gz tests/results/*.csv
    ```
- Cloud Storage:
  - Upload to a cloud service (e.g., AWS S3) for team access:

    ```bash
    aws s3 cp tests/results/ s3://your-bucket/locust-results/ --recursive
    ```
- Version Control (Optional):
  - Store results in a Git repository (ensure sensitive data is excluded):

    ```bash
    git add tests/results/*.csv
    git commit -m "Add Locust test results for 2025-05-11"
    git push
    ```
- Key Metrics:
  - p95 Response Time: Should be < 500ms (Section 7.2). Check the `95%` column in `_stats.csv`.
  - Error Rate: Should be < 0.1% (Section 7.2). Calculate as `Failure Count / Request Count` (a scripted check follows this list).
  - Requests/s: Ensure throughput meets expected load.
- Visualization:
  - Use tools like Excel, pandas, or Grafana to visualize `_stats_history.csv`:

    ```python
    import pandas as pd
    import matplotlib.pyplot as plt  # pandas plotting uses the matplotlib backend

    df = pd.read_csv("admin_test_results_stats_history.csv")
    df.plot(x="Timestamp", y="Median Response Time", title="Response Time Over Time")
    plt.show()
    ```

  - Import CSVs into Grafana for dashboards (requires a CSV data source plugin).
- Validation:
  - Compare metrics against performance KPIs (Section 7.2).
  - Investigate high failure rates or slow response times using FastAPI logs:

    ```
    {job="fastapi"} |= "ERROR"
    ```
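The KPI checks above can be scripted against `_stats.csv`. A minimal sketch with pandas, assuming the `admin_test_results` prefix from the earlier example and that the summary row is named `Aggregated` (true in recent Locust versions, but worth verifying against your output):

```python
import pandas as pd

stats = pd.read_csv("tests/results/admin_test_results_stats.csv")

# The summary row aggregates all endpoints (row name may vary by Locust version).
total = stats[stats["Name"] == "Aggregated"].iloc[0]

p95 = total["95%"]
error_rate = total["Failure Count"] / total["Request Count"]

assert p95 < 500, f"p95 {p95}ms exceeds the 500ms KPI (Section 7.2)"
assert error_rate < 0.001, f"error rate {error_rate:.2%} exceeds the 0.1% KPI (Section 7.2)"
print(f"KPIs met: p95={p95}ms, error rate={error_rate:.2%}")
```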
To test all sections simultaneously:
- Single Locust File:
  - Create a combined script importing all user classes (Locust picks up every imported `HttpUser` subclass; a traffic-weighting sketch follows this list):

    ```python
    from locust_admin import AdminUser
    from locust_predictions import PredictionsUser
    from locust_user import UserAuthUser
    from locust_budget import BudgetUser
    from locust_profile import ProfileUser
    from locust_assets import AssetsUser
    ```

  - Save as `locust_combined.py` and run:

    ```bash
    locust -f tests/locust/locust_combined.py --host=https://api-intellifinance.shancloudservice.com
    ```
- Multiple Instances:
  - Run each script as a separate Locust instance (master-worker setup):

    ```bash
    locust -f tests/locust/locust_admin.py --host=https://api-intellifinance.shancloudservice.com --master &
    locust -f tests/locust/locust_predictions.py --host=https://api-intellifinance.shancloudservice.com --worker &
    locust -f tests/locust/locust_user.py --host=https://api-intellifinance.shancloudservice.com --worker &
    locust -f tests/locust/locust_budget.py --host=https://api-intellifinance.shancloudservice.com --worker &
    locust -f tests/locust/locust_profile.py --host=https://api-intellifinance.shancloudservice.com --worker &
    locust -f tests/locust/locust_assets.py --host=https://api-intellifinance.shancloudservice.com --worker &
    ```

  - Access the master UI at `http://localhost:8089` to control the workers.
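For the combined-file approach, Locust's `weight` class attribute can skew the traffic mix if one section should receive more load than the others. A short sketch; the 3:1 ratio is illustrative:

```python
# In locust_combined.py, after the imports above: bias traffic toward predictions.
PredictionsUser.weight = 3  # selected ~3x as often as weight-1 user classes
AdminUser.weight = 1
```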
- Login Failures:
  - Symptom: Console shows "Login failed: 404" or "Login failed: 500" (a sketch of the handler that emits these appears after this list).
  - Solution:
    - Verify credentials:

      ```bash
      curl -X POST https://api-intellifinance.shancloudservice.com/auth/login \
        -H "Content-Type: application/json" \
        -d '{"username":"test_user","password":"test_pass"}'
      ```

    - Ensure `test_user` exists in the database.
    - Check FastAPI logs:

      ```bash
      journalctl -u fastapi.service
      ```
- HTTP 404 Errors:
  - Symptom: Endpoints return 404 (e.g., invalid ticker, user ID).
  - Solution:
    - Update parameters (e.g., `user_ids`, `tickers`) with valid values:

      ```sql
      SELECT id FROM users;              -- For user IDs
      SELECT ticker_symbol FROM stocks;  -- For tickers
      ```

    - Check FastAPI logs:

      ```
      {job="fastapi"} |= "not found"
      ```
- HTTP 401/403 Errors:
  - Symptom: Authentication errors for endpoints.
  - Solution:
    - Ensure the script holds a valid token (a re-authentication sketch appears after this list). Test authentication:

      ```bash
      curl -H "Authorization: Bearer <token>" https://api-intellifinance.shancloudservice.com/<endpoint>
      ```

    - If the endpoint is unauthenticated, remove `headers` from the task.
- HTTP 429/503 Errors:
  - Symptom: Rate limiting or service unavailable.
  - Solution:
    - Reduce `--users` (e.g., `--users=5`).
    - Check Nginx rate limits (Section 6.4):

      ```bash
      sudo cat /etc/nginx/nginx.conf
      ```

    - Monitor logs:

      ```
      {job="nginx"} |= "429"
      ```
- HTTP 500 Errors:
  - Symptom: Server or database errors.
  - Solution:
    - Check database connectivity (AWS RDS, Section 9.2):

      ```
      {job="postgresql"} |= "ERROR"
      ```

    - Verify FastAPI logs:

      ```
      {job="fastapi"} |= "ERROR"
      ```
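The two sketches referenced above, written as drop-in methods for the user classes in these scripts (names and routes are illustrative, not the exact staging code). First, the login handler that produces the "Login failed: ..." console messages; the `access_token` field is an assumption about the login response:

```python
def on_start(self):
    response = self.client.post("/auth/login", json={
        "username": "test_user",
        "password": "test_pass",
    })
    if response.status_code != 200:
        # Surfaces in the console as "Login failed: 404" / "Login failed: 500".
        print(f"Login failed: {response.status_code}")
        self.headers = {}
        return
    self.headers = {"Authorization": f"Bearer {response.json()['access_token']}"}
```

Second, re-authenticating when a token expires mid-run, using Locust's `catch_response`; the `/budget` route and the `self.login()` helper are hypothetical:

```python
@task
def get_budget(self):
    with self.client.get("/budget", headers=self.headers, catch_response=True) as resp:
        if resp.status_code == 401:
            self.login()  # hypothetical helper that repeats the on_start login flow
            resp.failure("token expired; re-authenticated for the next request")
        elif resp.status_code == 200:
            resp.success()
```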