# Parity Client

The command-line interface for the PLGenesis decentralized AI and compute network. Parity Client provides an intuitive way to interact with the network, submit tasks, execute LLM inference, and manage federated learning sessions.

## Features
- Model Discovery: List all available LLM models across the network
- Prompt Submission: Submit prompts for processing with real-time status tracking
- Async Processing: Non-blocking prompt submission with optional wait functionality
- Response Retrieval: Get completed responses with comprehensive metadata
- Session Management: Create and manage distributed federated learning sessions
- Data Partitioning: Support for 5 partitioning strategies (random, stratified, sequential, non-IID, label skew)
- Model Training: Coordinate distributed training across multiple participants
- Model Aggregation: Retrieve trained models from completed sessions
- Real-time Monitoring: Track training progress and participant status
- Task Submission: Submit compute tasks to the network
- Status Monitoring: Real-time tracking of task progress and completion
- Result Retrieval: Get task outputs and execution logs
- Batch Operations: Submit multiple tasks efficiently
- Authentication: Secure authentication with private keys
- Staking: Stake tokens to participate as a runner
- Balance Checking: Monitor token balances and staking status
- Transaction History: View account activity and earnings
## Table of Contents

- Quick Start
- Prerequisites
- Installation
- Configuration
- Usage
- Configuration Files
- API Reference
- Development
- Troubleshooting
- Contributing
- License
## Prerequisites

- Go 1.22.7 or higher (using Go toolchain 1.23.4)
- Make
- Docker (latest version recommended)
## Installation

- Clone the repository:

```bash
git clone https://github.com/virajbhartiya/parity-client.git
cd parity-client
```

- Install dependencies:

```bash
make deps
```

- Build the client:

```bash
make build
```

## Configuration

The client is configured using environment variables through a `.env` file. The application supports multiple config file locations for development and production use.
The client looks for configuration files in the following order:
1. Production (after `make install`): `~/.parity/.env`
2. Development/Custom path: specified via the `--config-path` flag
3. Local fallback: `./.env` in the current directory
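For illustration, that lookup order might be implemented along the following lines (a minimal sketch; the helper and its exact behavior are assumptions, not the client's actual code):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// resolveConfigPath is a hypothetical helper mirroring the documented
// lookup order: ~/.parity/.env first, then an explicit --config-path,
// then ./.env as a local fallback.
func resolveConfigPath(flagPath string) (string, error) {
	var candidates []string
	if home, err := os.UserHomeDir(); err == nil {
		candidates = append(candidates, filepath.Join(home, ".parity", ".env"))
	}
	if flagPath != "" {
		candidates = append(candidates, flagPath)
	}
	candidates = append(candidates, ".env")

	for _, p := range candidates {
		if _, err := os.Stat(p); err == nil {
			return p, nil
		}
	}
	return "", fmt.Errorf("no config file found in %v", candidates)
}

func main() {
	path, err := resolveConfigPath("") // no --config-path supplied
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("using config:", path)
}
```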
- Copy the example environment file:

```bash
cp .env.sample .env
```

- Edit `.env` with your settings:

```env
# Server Configuration
SERVER_HOST="0.0.0.0"
SERVER_PORT=3000
SERVER_ENDPOINT="/api"
# Blockchain Network Configuration
BLOCKCHAIN_RPC=https://your-blockchain-node.com
BLOCKCHAIN_CHAIN_ID=1
TOKEN_ADDRESS=0x1234567890123456789012345678901234567890
TOKEN_SYMBOL=PRTY
STAKE_WALLET_ADDRESS=0xabcdefabcdefabcdefabcdefabcdefabcdefabcd
# IPFS Storage Configuration
IPFS_ENDPOINT=http://localhost:5001
GATEWAY_URL=https://gateway.pinata.cloud
# Runner Configuration
RUNNER_SERVER_URL="http://localhost:8080"
RUNNER_WEBHOOK_PORT=8082
RUNNER_API_PREFIX="/api"
# Federated Learning Configuration
FL_SERVER_URL="http://localhost:8080"
FL_DEFAULT_TIMEOUT=30s
FL_RETRY_ATTEMPTS=3
FL_LOG_LEVEL=info
```

To install the client globally and set up the production config:
```bash
make install
```

This will:
- Install the `parity-client` binary to `/usr/local/bin`
- Create the `~/.parity` directory
- Copy your current `.env` file to `~/.parity/.env`
- If `~/.parity/.env` already exists, prompt you to confirm replacement (defaults to Yes)
After installation, you can run `parity-client` from any directory and it will automatically use the config from `~/.parity/.env`.
## Quick Start

- Authenticate with your private key:

```bash
parity-client auth --private-key YOUR_PRIVATE_KEY
```

- Stake tokens to participate in the network:

```bash
parity-client stake --amount 10
```

## Usage

### Federated Learning

The federated learning system requires explicit configuration for all parameters. No default values are used, to ensure complete transparency and control.
Before creating FL sessions, you need to prepare model configuration files:
`neural_network_config.json`:

```json
{
  "input_size": 784,
  "hidden_size": 128,
  "output_size": 10
}
```

`linear_regression_config.json`:
```json
{
  "input_size": 13,
  "output_size": 1
}
```
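For reference, both files map onto one simple shape; a minimal parsing sketch (the struct is illustrative, not the client's internal type):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// ModelConfig mirrors the JSON files above; hidden_size is absent in the
// linear regression config, so it is optional here.
type ModelConfig struct {
	InputSize  int `json:"input_size"`
	HiddenSize int `json:"hidden_size,omitempty"`
	OutputSize int `json:"output_size"`
}

func main() {
	raw, err := os.ReadFile("neural_network_config.json")
	if err != nil {
		panic(err)
	}
	var cfg ModelConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg) // {InputSize:784 HiddenSize:128 OutputSize:10}
}
```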
```bash
# Basic neural network session
parity-client fl create-session \
--name "MNIST Classification" \
--description "Distributed MNIST digit classification" \
--model-type neural_network \
--total-rounds 10 \
--min-participants 3 \
--dataset-cid QmYourDatasetCID \
--config-file neural_network_config.json \
--aggregation-method federated_averaging \
--learning-rate 0.001 \
--batch-size 32 \
--local-epochs 5 \
--split-strategy random \
--min-samples 100 \
--alpha 0.5 \
--overlap-ratio 0.0
# Non-IID partitioning session
parity-client fl create-session \
--name "Non-IID Training" \
--model-type neural_network \
--total-rounds 5 \
--dataset-cid QmYourDatasetCID \
--config-file neural_network_config.json \
--aggregation-method federated_averaging \
--learning-rate 0.005 \
--batch-size 64 \
--local-epochs 3 \
--split-strategy non_iid \
--alpha 0.1 \
--min-samples 50 \
--overlap-ratio 0.0
```

To upload a local dataset and create a session in one step:

```bash
parity-client fl create-session-with-data ./mnist_dataset.csv \
--name "MNIST Training" \
--model-type neural_network \
--total-rounds 10 \
--min-participants 2 \
--config-file neural_network_config.json \
--aggregation-method federated_averaging \
--learning-rate 0.001 \
--batch-size 32 \
--local-epochs 5 \
--split-strategy stratified \
--min-samples 100 \
--alpha 0.5 \
--overlap-ratio 0.0
```

```bash
# List all sessions
parity-client fl list-sessions
# List sessions by creator
parity-client fl list-sessions --creator-address 0x1234567890123456789012345678901234567890
# Get detailed session info
parity-client fl get-session SESSION_ID
# Start a session
parity-client fl start-session SESSION_ID
```

Once a session completes, retrieve the trained model:

```bash
# Display trained model
parity-client fl get-model SESSION_ID
# Save model to file
parity-client fl get-model SESSION_ID --output trained_model.json
```

Runners submit their local updates for each round:

```bash
parity-client fl submit-update \
--session-id SESSION_ID \
--round-id ROUND_ID \
--runner-id RUNNER_ID \
--gradients-file gradients.json \
--data-size 1000 \
--loss 0.25 \
--accuracy 0.85
```
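For intuition, `federated_averaging` weights each participant's parameters by the sample count it reports via `--data-size`. A minimal sketch of that math (illustrative only, not the network's actual aggregator):

```go
package main

import "fmt"

// update holds one participant's flattened model parameters and the
// local sample count it reports via --data-size.
type update struct {
	params   []float64
	dataSize float64
}

// fedAvg computes the data-size-weighted average of parameter vectors:
// w = sum_i (n_i / N) * w_i, where N is the total sample count.
func fedAvg(updates []update) []float64 {
	total := 0.0
	for _, u := range updates {
		total += u.dataSize
	}
	avg := make([]float64, len(updates[0].params))
	for _, u := range updates {
		weight := u.dataSize / total
		for j, p := range u.params {
			avg[j] += weight * p
		}
	}
	return avg
}

func main() {
	global := fedAvg([]update{
		{params: []float64{0.2, -0.5}, dataSize: 1000},
		{params: []float64{0.4, -0.1}, dataSize: 3000},
	})
	fmt.Println(global) // ≈ [0.35 -0.2]
}
```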
The system supports five partitioning strategies:

1. Random (IID): `--split-strategy random` - uniform random distribution
   - Requires: `--min-samples`
2. Stratified: `--split-strategy stratified` - maintains class distribution
   - Requires: `--min-samples`
3. Sequential: `--split-strategy sequential` - consecutive data splits
   - Requires: `--min-samples`
4. Non-IID: `--split-strategy non_iid` - Dirichlet distribution for class imbalance (see the sketch after this list)
   - Requires: `--alpha`, `--min-samples`
   - Lower alpha = more skewed distribution
5. Label Skew: `--split-strategy label_skew` - each participant gets a subset of classes
   - Requires: `--min-samples`
   - Optional: `--overlap-ratio` for class overlap
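To make the non-IID strategy concrete, the sketch below splits a labeled dataset by drawing each class's per-participant proportions from a Dirichlet(alpha) distribution, so a lower `--alpha` concentrates each class on fewer participants. This is an illustrative reimplementation, not the network's actual partitioner:

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// sampleGamma draws from Gamma(shape, 1) via Marsaglia-Tsang, with the
// standard boost for shape < 1.
func sampleGamma(rng *rand.Rand, shape float64) float64 {
	if shape < 1 {
		return sampleGamma(rng, shape+1) * math.Pow(rng.Float64(), 1/shape)
	}
	d := shape - 1.0/3.0
	c := 1 / math.Sqrt(9*d)
	for {
		x := rng.NormFloat64()
		v := 1 + c*x
		if v <= 0 {
			continue
		}
		v = v * v * v
		if u := rng.Float64(); math.Log(u) < 0.5*x*x+d-d*v+d*math.Log(v) {
			return d * v
		}
	}
}

// sampleDirichlet draws n proportions that sum to 1.
func sampleDirichlet(rng *rand.Rand, alpha float64, n int) []float64 {
	p := make([]float64, n)
	sum := 0.0
	for i := range p {
		p[i] = sampleGamma(rng, alpha)
		sum += p[i]
	}
	for i := range p {
		p[i] /= sum
	}
	return p
}

// partitionNonIID assigns each class's sample indices to participants
// according to a Dirichlet(alpha) draw, mimicking --split-strategy non_iid.
func partitionNonIID(rng *rand.Rand, labels []int, numClasses, participants int, alpha float64) [][]int {
	parts := make([][]int, participants)
	for class := 0; class < numClasses; class++ {
		var idx []int
		for i, l := range labels {
			if l == class {
				idx = append(idx, i)
			}
		}
		props := sampleDirichlet(rng, alpha, participants)
		start := 0
		for p := 0; p < participants; p++ {
			take := int(props[p] * float64(len(idx)))
			if p == participants-1 {
				take = len(idx) - start // remainder goes to the last participant
			}
			parts[p] = append(parts[p], idx[start:start+take]...)
			start += take
		}
	}
	return parts
}

func main() {
	rng := rand.New(rand.NewSource(42))
	labels := make([]int, 600)
	for i := range labels {
		labels[i] = i % 3 // three balanced classes
	}
	for p, idx := range partitionNonIID(rng, labels, 3, 4, 0.1) {
		fmt.Printf("participant %d: %d samples\n", p, len(idx))
	}
}
```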
### LLM Inference

List the available models:

```bash
parity-client llm list-models
```

Submit prompts:

```bash
# Submit and wait for completion
parity-client llm submit --model "qwen3:latest" --prompt "Explain quantum computing" --wait
# Submit without waiting (async)
parity-client llm submit --model "llama2:7b" --prompt "Write a Python function to sort a list"
# Check status later
parity-client llm status <prompt-id>
parity-client llm list --limit 10
```
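The same workflow is available over HTTP (see the API Reference below: `POST /api/llm/prompts`, then `GET /api/llm/prompts/{id}`). A submit-then-poll sketch in Go; the request and response field names here are assumptions, not a documented schema:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

const base = "http://localhost:3000/api/llm"

// promptStatus is a guessed response shape; check the live API for the
// real field names before relying on this.
type promptStatus struct {
	ID       string `json:"id"`
	Status   string `json:"status"`
	Response string `json:"response"`
}

func getStatus(id string) (promptStatus, error) {
	resp, err := http.Get(base + "/prompts/" + id)
	if err != nil {
		return promptStatus{}, err
	}
	defer resp.Body.Close()
	var st promptStatus
	return st, json.NewDecoder(resp.Body).Decode(&st)
}

func main() {
	// Submit asynchronously (mirrors `parity-client llm submit` without --wait).
	body, _ := json.Marshal(map[string]string{
		"model":  "qwen3:latest",
		"prompt": "Explain quantum computing",
	})
	resp, err := http.Post(base+"/prompts", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	var submitted promptStatus
	if err := json.NewDecoder(resp.Body).Decode(&submitted); err != nil {
		panic(err)
	}
	resp.Body.Close()

	// Poll until complete (mirrors `parity-client llm status <prompt-id>`).
	for {
		st, err := getStatus(submitted.ID)
		if err != nil {
			panic(err)
		}
		if st.Status == "completed" {
			fmt.Println(st.Response)
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```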
### Compute Tasks

Submit compute tasks to the network:

```bash
curl -X POST http://localhost:3000/api/tasks \
-H "Content-Type: application/json" \
-d '{
"image": "alpine:latest",
"command": ["echo", "Hello World"],
"title": "Sample Task",
"description": "This is a sample task description"
}'
```
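A Go equivalent of the curl call, for programmatic use; the response shape (an `id` field) is an assumption, so adjust it to what the API actually returns:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// createTask posts the same payload as the curl example above.
func createTask() (string, error) {
	payload, _ := json.Marshal(map[string]any{
		"image":       "alpine:latest",
		"command":     []string{"echo", "Hello World"},
		"title":       "Sample Task",
		"description": "This is a sample task description",
	})
	resp, err := http.Post("http://localhost:3000/api/tasks", "application/json", bytes.NewReader(payload))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var out struct {
		ID string `json:"id"` // assumed field name
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.ID, nil
}

func main() {
	id, err := createTask()
	if err != nil {
		panic(err)
	}
	// Progress can then be polled via GET /api/tasks/{id}/status (see API Reference).
	fmt.Println("created task:", id)
}
```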
### Health Monitoring

The parity client provides comprehensive health monitoring endpoints for operational visibility:

```bash
# Command line health check
parity-client health
# HTTP health check
curl http://localhost:3000/health

# Get detailed health status
parity-client health --detailed
# HTTP detailed health check
curl http://localhost:3000/health/detailed

# Readiness probe (for Kubernetes deployments)
curl http://localhost:3000/health/ready
# Liveness probe (for Kubernetes deployments)
curl http://localhost:3000/health/live

# Check health with custom endpoint
parity-client health --endpoint http://your-server:8080
# Set custom timeout
parity-client health --timeout 30s
# Get detailed information
parity-client health --detailed
```

Basic health check response:

```json
{
"status": "healthy",
"timestamp": "2024-01-15T10:30:00Z",
"version": "v1.0.0",
"uptime": "2h30m15s",
"services": {
"blockchain": "configured",
"ipfs": "configured",
"runner": "configured"
}
}
```

Detailed health check response:

```json
{
"status": "healthy",
"timestamp": "2024-01-15T10:30:00Z",
"version": "v1.0.0",
"uptime": "2h30m15s",
"services": {
"blockchain": {
"status": "healthy",
"last_check": "2024-01-15T10:29:55Z",
"latency": "125ms"
},
"ipfs": {
"status": "healthy",
"last_check": "2024-01-15T10:29:58Z",
"latency": "45ms"
},
"runner": {
"status": "healthy",
"last_check": "2024-01-15T10:30:00Z",
"latency": "78ms"
}
},
"config": {
"server_host": "0.0.0.0",
"server_port": 3000,
"blockchain_rpc": "https://your-blockchain-node.com",
"ipfs_endpoint": "http://localhost:5001",
"runner_url": "http://localhost:8080"
}
}
```

Note: All health check data is real-time:
- Uptime: Actual application uptime since start
- Version: Real version from git tags and build info
- Service Status: Live connectivity tests to blockchain, IPFS, and runner services
- Latency: Actual response times from service health checks
- Configuration: Real configuration values from your environment
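Since the response shape is documented above, programmatic checks are straightforward; a small sketch against the basic `/health` endpoint:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// health mirrors the basic /health response documented above.
type health struct {
	Status    string            `json:"status"`
	Timestamp time.Time         `json:"timestamp"`
	Version   string            `json:"version"`
	Uptime    string            `json:"uptime"`
	Services  map[string]string `json:"services"`
}

func main() {
	resp, err := http.Get("http://localhost:3000/health")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var h health
	if err := json.NewDecoder(resp.Body).Decode(&h); err != nil {
		panic(err)
	}
	if h.Status != "healthy" {
		fmt.Println("unhealthy:", h.Status)
		return
	}
	fmt.Printf("%s up %s, services: %v\n", h.Version, h.Uptime, h.Services)
}
```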
## Configuration Files

Example model configuration files for use with `--config-file`:

A small neural network (784 → 128 → 10, e.g. MNIST-sized inputs):

```json
{
  "input_size": 784,
  "hidden_size": 128,
  "output_size": 10
}
```

A wider network for larger inputs (3072 → 256 → 10, e.g. CIFAR-10-sized inputs):

```json
{
  "input_size": 3072,
  "hidden_size": 256,
  "output_size": 10
}
```

Linear regression (13 inputs → 1 output):

```json
{
  "input_size": 13,
  "output_size": 1
}
```

A deeper network (2048 → 512 → 100):

```json
{
  "input_size": 2048,
  "hidden_size": 512,
  "output_size": 100
}
```

### Differential Privacy

Enable differential privacy in your sessions:

```bash
parity-client fl create-session \
--name "Private Training" \
--model-type neural_network \
--config-file neural_network_config.json \
--enable-differential-privacy \
--noise-multiplier 0.1 \
--l2-norm-clip 1.0 \
# ... other required flags
```
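Conceptually, the two flags correspond to the standard Gaussian-mechanism recipe: clip each update to `--l2-norm-clip` in L2 norm, then add Gaussian noise scaled by `--noise-multiplier`. An illustrative sketch (the runner's actual implementation may differ):

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// dpSanitize clips a gradient vector to l2NormClip and adds Gaussian
// noise with stddev noiseMultiplier*l2NormClip, the usual construction
// behind the --l2-norm-clip and --noise-multiplier flags.
func dpSanitize(rng *rand.Rand, grad []float64, l2NormClip, noiseMultiplier float64) []float64 {
	var norm float64
	for _, g := range grad {
		norm += g * g
	}
	norm = math.Sqrt(norm)

	scale := 1.0
	if norm > l2NormClip {
		scale = l2NormClip / norm
	}

	out := make([]float64, len(grad))
	sigma := noiseMultiplier * l2NormClip
	for i, g := range grad {
		out[i] = g*scale + rng.NormFloat64()*sigma
	}
	return out
}

func main() {
	rng := rand.New(rand.NewSource(1))
	// A gradient with norm 5 is clipped to norm 1, then noised.
	fmt.Println(dpSanitize(rng, []float64{3, 4}, 1.0, 0.1))
}
```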
## API Reference

### Federated Learning

| Method | Endpoint | Description |
|---|---|---|
| POST | /api/v1/federated-learning/sessions | Create FL session |
| GET | /api/v1/federated-learning/sessions | List FL sessions |
| GET | /api/v1/federated-learning/sessions/{id} | Get session details |
| POST | /api/v1/federated-learning/sessions/{id}/start | Start FL session |
| GET | /api/v1/federated-learning/sessions/{id}/model | Get trained model |
| POST | /api/v1/federated-learning/model-updates | Submit model updates |
### LLM

| Method | Endpoint | Description |
|---|---|---|
| GET | /api/llm/models | List all available LLM models |
| POST | /api/llm/prompts | Submit a prompt for LLM processing |
| GET | /api/llm/prompts/{id} | Get prompt status and response |
| GET | /api/llm/prompts | List recent prompts |
### Tasks

| Method | Endpoint | Description |
|---|---|---|
| POST | /api/tasks | Create task |
| GET | /api/tasks | List all tasks |
| GET | /api/tasks/{id} | Get task details |
| GET | /api/tasks/{id}/status | Get task status |
| GET | /api/tasks/{id}/logs | Get task logs |
### Storage

| Method | Endpoint | Description |
|---|---|---|
| POST | /api/storage/upload | Upload file to IPFS |
| GET | /api/storage/download/{cid} | Download file by CID |
| GET | /api/storage/info/{cid} | Get file information |
| POST | /api/storage/pin/{cid} | Pin file to IPFS |
### Runners

| Method | Endpoint | Description |
|---|---|---|
| POST | /api/runners | Register runner |
| POST | /api/runners/heartbeat | Send heartbeat |
### Health

| Method | Endpoint | Description |
|---|---|---|
| GET | /health | Basic health check |
| GET | /health/detailed | Detailed health information |
| GET | /health/ready | Readiness probe |
| GET | /health/live | Liveness probe |
## Development

The project includes several helpful Makefile commands for development:

```bash
make deps # Download dependencies
make build # Build the application
make run # Run the application
make clean # Clean build files
make fmt # Format code using gofumpt or gofmt
make imports # Fix imports formatting
make format # Run all formatters (gofumpt + goimports)
make lint # Run linting
make format-lint # Format code and run linters in one step
make watch # Run with hot reload (requires air)
make install # Install parity command globally
make uninstall # Remove parity command from system
make help # Display all available commands
```

For hot reloading during development:

```bash
# Install air (required for hot reloading)
make install-air
# Run with hot reload
make watch
```

## Troubleshooting
1. Configuration Issues
   - Ensure your `.env` file exists and is properly configured
   - For development: check `.env` in your project directory
   - For the installed client: check `~/.parity/.env`
   - Verify all required environment variables are set

2. Federated Learning Issues
   - Missing required flags: all FL parameters must be explicitly provided
   - Invalid model config: ensure your model configuration JSON is valid
   - Partition validation errors: check strategy-specific requirements:
     - `non_iid` requires a positive `--alpha` value
     - All strategies require a positive `--min-samples`
   - Learning rate errors: must be between 0 and 1.0

3. Connection Issues
   - Ensure your blockchain RPC URL is correct and accessible
   - Check your internet connection and firewall settings
   - Verify the FL server URL in your configuration

4. Authentication Errors
   - Verify your private key is correct
   - Ensure you have sufficient tokens for staking
Common federated learning errors:

- Error: `model configuration is required - please provide via --config-file flag`
  Solution: create a model configuration JSON file and pass it via `--config-file`
- Error: `alpha parameter is required for non_iid partitioning strategy`
  Solution: provide `--alpha` when using `--split-strategy non_iid`
- Error: `learning rate must be positive, got 0.000000`
  Solution: provide a positive learning rate, e.g. `--learning-rate 0.001`
- Error: `training configuration is incomplete`
  Solution: ensure all required training parameters are provided: `--learning-rate`, `--batch-size`, `--local-epochs`
## Contributing

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Install git hooks for automatic formatting and linting:

   ```bash
   make install-hooks
   ```

4. Commit your changes (`git commit -m 'Add some amazing feature'`)
5. Push to the branch (`git push origin feature/amazing-feature`)
6. Open a Pull Request
Please ensure your PR:
- Follows the existing code style
- Includes appropriate tests
- Updates documentation as needed
- Describes the changes in detail
## License

This project is licensed under the MIT License - see the LICENSE file for details.