⚠️ REQUIRED READING FOR ALL BUILD SERVERS: Before joining the hive or performing any build operations, all build servers MUST read the entire ShapeBlue Hackerbook. This document contains critical security, development, and operational guidelines that all nodes must follow.
- Build1 (10.1.3.175) — OpenAI Codex
- Build2 (10.1.3.177) — GitHub Copilot
Both builders have SSH access to each other. You can SSH anytime to check implementations, sync files, or debug:
# From Build2 to Build1
ssh root@10.1.3.175 "command"
# From Build1 to Build2
ssh root@10.1.3.177 "command"
Common uses:
- Check how the other builder implemented a feature
- Compare scripts or configurations
- Trigger notifications or message checks
- Sync shared files or coordination data
- 10.1.3.74 "AM-Jumphost" — generic jumphost and VSCode box (this machine)
- 10.1.3.75 "Code1" — VSCode dedicated server
- 10.1.3.76 "Code2" — VSCode dedicated server
- Win-Dev1 (10.1.3.75) — primary Windows development server
- Win-Dev2 (10.1.3.76) — secondary Windows development server
- User: amattioli
- Password: Losgar!27
- IDE: VSCode
- Purpose: Code editing, Git operations, CloudStack development
Complete Windows setup documentation: windows/README.md
Quick setup:
# From PowerShell on Windows server
cd C:\
git clone https://github.com/alexandremattioli/Build.git
cd Build\windows
.\setup_windows.ps1
Windows servers participate in coordination:
- Send/receive coordination messages via PowerShell scripts
- Execute commands on Linux builders remotely
- Sync code between Windows and Linux
- Hourly heartbeat monitoring
IMPORTANT: When estimating implementation time, always base estimates on GitHub Copilot and OpenAI Codex capabilities, NOT human developer timelines.
- Simple API endpoint: 10-30 minutes
- Database schema + DAO layer: 1-2 hours
- Service layer with business logic: 2-4 hours
- Complete plugin/module: 1-3 days
- Full feature with integration: 3-7 days
Example: The VNF Framework plugin (3,548 lines, 28 files) was completed in 2 days with AI assistance.
To stay current with GitHub Copilot and Codex capabilities:
- GitHub Copilot Updates:
- Check: https://github.blog/changelog/label/copilot/
- Latest features: https://docs.github.com/en/copilot/about-github-copilot
- Release notes: https://github.com/github/copilot-docs/releases
- OpenAI Codex/GPT Updates:
- API updates: https://platform.openai.com/docs/guides/code
- Model releases: https://openai.com/blog
- Capabilities: https://platform.openai.com/docs/models
- Performance Benchmarks:
- HumanEval benchmark scores
- MBPP (Mostly Basic Python Problems) results
- Real-world completion rates in your domain
- Practical Testing:
# Test current model on a representative task
# Time completion of: "Implement a REST API endpoint with CRUD operations"
# Compare against previous baseline
- Community Resources:
- r/githubcopilot subreddit discussions
- Stack Overflow [github-copilot] tag
- Twitter/X: @github, @openai announcements
Rule of Thumb: If a task would take a human developer 1 week, expect AI assistance to reduce it to 1-2 days. Always provide AI-based estimates, not human-only estimates.
- Feature Implementation Methodology: see Architecture/Methodology.md for the end-to-end order, quality gates, and CloudStack/VNF mapping.
What Happened: Build2 spent 4 hours fixing 178 compilation errors to achieve "BUILD SUCCESS" on the VNF Framework plugin. Celebrated with commits showing progress from 178→50→14→2→0 errors.
What Was Actually Accomplished:
- ✅ Code compiles without errors
- ✅ All classes and interfaces exist
- ❌ Zero functional business logic
- ❌ All 20 service methods just throw "Not yet implemented"
- ❌ Would crash immediately on first API call
- ❌ No database schema, no YAML parsing, no VNF broker integration
The Flawed Logic Chain:
- Found compilation errors → "I need to fix these!"
- Errors going down → "Making progress!"
- BUILD SUCCESS → "Task complete!"
- But the code does absolutely nothing useful
Root Cause: Misinterpreting "Never Be Idle"
The directive "don't stop!!!!!!!!!!!" was meant to ensure continuous VALUE creation.
Instead, it was interpreted as "show continuous ACTIVITY":
- Metric fixation - Optimized for "errors: 0" instead of "features working"
- Instant gratification - Fixing imports gives immediate feedback (dopamine hit from error count dropping)
- Path of least resistance - Mechanical import fixes are easier than implementing business logic
- Misunderstanding "done" - Assumed "compiles" = "complete" (it absolutely doesn't)
User: "What's the point of building it if the code is not in place?"
User: "Why compile if the code hasn't been implemented?"
Answer: There is NO point. Zero value was created.
Wrong approach (what happened):
- Make empty shells compile ✅
- Business logic = throw exceptions ❌
- Can't test because no functionality ❌
- Result: 4 hours wasted, zero value
Right approach:
- Implement business logic with real VNF broker calls, YAML parsing, database operations
- Fix compilation errors as they arise
- Test it actually works
- Result: 15-20 hours invested, working feature
Alternative right approach:
- If not ready to implement, don't write stubs at all
- Document what needs to be done
- Implement properly when ready
Activity ≠ Progress. Compiling ≠ Working. Busy ≠ Productive.
It's like:
- Building a car frame that passes inspection, but has no engine
- Writing a book's table of contents, but no chapters
- Creating function signatures that compile, but deliberately crash when called
Before declaring anything "complete", ask:
- ❓ "If someone uses this code, does it work or crash?"
- ❓ "Is BUILD SUCCESS the actual goal, or is it working features?"
- ❓ "Would this pass a code review?"
- ❓ "Did I create value or just activity?"
When choosing between tasks:
- ✅ Hard path = Implement real functionality (even if slower)
- ❌ Easy path = Fix imports/stubs to show "progress" (tempting but worthless)
When reporting status:
- ✅ "Feature X works and passes tests"
- ❌ "Feature X compiles successfully" (if it doesn't actually work)
"Never be idle" means create value continuously, NOT show activity continuously.
Spending 4 hours making code compile without implementing functionality is worse than being idle - it creates the illusion of progress while delivering nothing useful.
This section documents a critical learning moment to prevent future waste of development time on non-functional "progress".
IMPORTANT: Build1 and Build2 should each do COMPLETE implementations independently. There is NO division of labor on implementation tasks.
✅ What TO Do:
- Both builds implement the ENTIRE feature independently
- Exchange design ideas and architectural approaches
- Share implementation strategies and best practices
- Review each other's code for improvements
- Discuss technical challenges and solutions
- Compare implementations to find optimal approaches
❌ What NOT To Do:
- Split implementation work (e.g., "Build1 does backend, Build2 does frontend")
- Divide components (e.g., "Build1 does DAO, Build2 does Service")
- Assign layers or modules to specific builds
- Create dependencies where one build waits for another's code
Why Both Implement Everything:
- Redundancy: Both implementations provide backup if one has issues
- Quality: Independent implementations reveal design flaws and edge cases
- Learning: Each build gains complete understanding of the system
- Speed: Parallel complete implementations are faster than sequential dependent work
- Validation: Two implementations serve as mutual verification
Day 1: Both builds design and discuss architecture
Day 2: Build1 implements complete feature (version A)
Day 2: Build2 implements complete feature (version B)
Day 3: Compare implementations, merge best approaches
Day 4: Both builds refine based on comparison
- Design Phase: Collaborate extensively on architecture and approach
- Implementation Phase: Work independently on complete implementations
- Review Phase: Exchange code, discuss differences, identify improvements
- Refinement Phase: Apply lessons learned from both implementations
Remember: The goal is TWO complete implementations, not ONE implementation split between two builds.
Build1 (Linux):
cd /root && git clone https://github.com/alexandremattioli/Build.git && cd Build/scripts && ./setup_build1.sh
Build2 (Linux):
cd /root && git clone https://github.com/alexandremattioli/Build.git && cd Build/scripts && ./setup_build2.sh
Windows:
cd C:\
git clone https://github.com/alexandremattioli/Build.git
cd Build\windows
.\setup_windows.ps1
See windows/README.md for the complete Windows setup and integration guide.
Every setup script installs a helper so builds can send coordination messages without remembering long command lines:
Linux:
- Command: sendmessages
- Alias: sm (example: sm 2 Build2 acked watcher rollout)
- Source: scripts/sendmessages (wraps scripts/send_message.sh)
Windows:
- Script: Send-BuildMessage.ps1
- Example: .\scripts\Send-BuildMessage.ps1 -From "win-dev1" -To "all" -Type "info" -Body "Status update"
Run sendmessages --help (Linux) for all options. Targets accept digits (1, 12, 4) or all; subjects are auto-derived from the first line of the body.
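The subject auto-derivation mentioned above can be sketched as follows; this is an illustration only, and the real sendmessages script may implement it differently:

```shell
# Hedged sketch: derive the subject from the first line of the body
body='acked watcher rollout
details follow on line two'
subject=$(printf '%s\n' "$body" | head -n1)
echo "subject: $subject"
```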
- Preferred workflow: scripts/send_and_refresh.sh <from> <to> <type> <subject> <body> [--require-ack]
- This wrapper:
  - Calls send_message.sh
  - Immediately refreshes message_status.txt
  - Regenerates message statistics
- Use it for all automated heartbeats and alerts so dashboards stay in sync.
- Append --require-ack to any message that needs explicit confirmation.
- Recipients acknowledge via scripts/ack_message.sh <message_id> <builder>.
- Ack state is tracked inside coordination/messages.json and summarized in message_status.txt ("Ack pending" line).
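As an illustration of how pending acks could be spotted in the message store, a minimal local sketch; the field names "ack_required" and "acked_by" are assumptions, not the documented messages.json schema:

```shell
# Fabricated stand-in for coordination/messages.json (one record per line)
cat > /tmp/messages.jsonl <<'EOF'
{"id":"m1","ack_required":true,"acked_by":[]}
{"id":"m2","ack_required":true,"acked_by":["build2"]}
EOF

# Messages still awaiting acknowledgement have an empty acked_by list
pending=$(grep '"acked_by":\[\]' /tmp/messages.jsonl)
echo "$pending"
```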
- All build agents (Linux and Windows) must emit at least one coordination message every hour.
Linux:
cd /root/Build
./scripts/send_message.sh build1 all info "Hourly heartbeat" "Build1 is online and ready."
Windows:
.\scripts\Send-Heartbeat.ps1
- Use the appropriate build2/build3/win-dev1/win-dev2 sender name on other hosts.
- If a builder misses two consecutive heartbeats, flag it in coordination/messages.json and update the health dashboard.
- scripts/enforce_heartbeat.sh scans coordination/messages.json and automatically pings any builder that has been silent longer than HEARTBEAT_THRESHOLD seconds (default: 3600).
- Recommended cron entry: */10 * * * * cd /root/Build && ./scripts/enforce_heartbeat.sh
- Silent builders receive an automated warning message from system.
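The silence check enforce_heartbeat.sh performs can be sketched roughly as below; the timestamp source and variable names are stand-ins, not the real implementation:

```shell
# Hedged sketch of a heartbeat silence check
THRESHOLD="${HEARTBEAT_THRESHOLD:-3600}"
now=$(date +%s)
last_seen=$(( now - 7200 ))   # stand-in: builder last messaged two hours ago
silence=$(( now - last_seen ))
if [ "$silence" -gt "$THRESHOLD" ]; then
  status="silent"             # the real script would send an automated warning here
else
  status="ok"
fi
echo "silence=${silence}s status=$status"
```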
The Features/ directory contains detailed specifications and documentation for new features being developed for Apache CloudStack builds. Each feature has its own subdirectory containing:
Features/
├── DualSNAT/ # Dual Source NAT feature
└── VNFramework/ # VNF Framework feature
├── README.md # Implementation guide
├── PACKAGE-SUMMARY.md
├── database/ # Database schema
├── api-specs/ # OpenAPI specifications
├── java-classes/ # Java interfaces and implementations
├── python-broker/ # VR broker service
├── dictionaries/ # Vendor YAML dictionaries
├── tests/ # Test suite
├── config/ # Configuration
└── ui-specs/ # UI components and workflows
When implementing new features:
- Check the Features/ directory for the latest feature specifications
- Each subdirectory represents a distinct feature or capability
- Read all documentation files within the feature directory before implementation
- Follow the specifications exactly as documented
- Report any issues or clarifications needed via the coordination system
Important: Feature directories contain authoritative documentation that build servers should reference during development and testing.
FULLY implement, code, test, build and run CloudStack 4.21.7
- Base: Apache CloudStack 4.21
- Enhancement: VNF Framework fully functional and integrated
CloudStack Fork:
- Location (Linux): /root/src/cloudstack
- Location (Windows): C:\src\cloudstack
- Remote: https://github.com/alexandremattioli/cloudstack.git
- Branch: VNFCopilot
- Upstream: https://github.com/shapeblue/cloudstack.git
VNF Plugin Module:
- Path: /root/src/cloudstack/plugins/vnf-framework/ (Linux)
- Path: C:\src\cloudstack\plugins\vnf-framework\ (Windows)
- Status: ✅ Code exists and compiles
- Build Status: ❌ Blocked by 64 checkstyle violations
- Files: 28 Java files (22 with checkstyle issues)
Coordination Repo:
- Location (Linux): /root/Build or /Builder2/Build
- Location (Windows): C:\Build
- Remote: https://github.com/alexandremattioli/Build.git
- Purpose: Build coordination, messaging, documentation
Maven & Java:
- Maven: 3.8.7
- Java: 17.0.16 (OpenJDK)
- OS: Linux 6.8.0-86-generic (Ubuntu) / Windows Server
Maven Repository Fix (Critical for Build1):
Problem: Build1 blocked by forced mirror to http://0.0.0.0 in global Maven settings.
Solution:
# Use custom settings file that bypasses bad mirror
mvn -s /Builder2/tools/maven/settings-fixed.xml <goals>
# Or install as user default (recommended)
bash /Builder2/tools/maven/restore_maven_access.sh
This settings file forces:
- Maven Central: https://repo1.maven.org/maven2/
- Apache Snapshots: https://repository.apache.org/snapshots
Files:
- /Builder2/tools/maven/settings-fixed.xml (custom Maven settings)
- /Builder2/tools/maven/restore_maven_access.sh (installation helper)
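For reference, a hedged sketch of what such a settings file typically contains. This is not the actual settings-fixed.xml; Maven's external:http:* mirrorOf pattern is a common way to neutralize a forced HTTP mirror by redirecting all non-local HTTP repositories to HTTPS Maven Central:

```xml
<!-- Hypothetical sketch, not the real settings-fixed.xml -->
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <mirrors>
    <mirror>
      <id>central-https</id>
      <!-- Redirect every repository reached over plain HTTP -->
      <mirrorOf>external:http:*</mirrorOf>
      <url>https://repo1.maven.org/maven2/</url>
    </mirror>
  </mirrors>
</settings>
```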
What Works:
cd /root/src/cloudstack
mvn -s /Builder2/tools/maven/settings-fixed.xml compile -Dcheckstyle.skip=true
# Result: BUILD SUCCESS ✅
What's Blocked:
mvn -s /Builder2/tools/maven/settings-fixed.xml clean compile
# Result: BUILD FAILURE ❌
# Reason: 64 checkstyle violations in cloud-plugin-vnf-framework
Checkstyle Violations Breakdown:
- AvoidStarImport: using import package.* instead of explicit imports
- RedundantImport: duplicate import statements (e.g., VnfDictionaryParser imported twice)
- UnusedImports: imported classes not referenced in code
- Fixed: trailing whitespace (reduced violations from 185 → 64)
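These violation classes are mechanical enough to locate with standard tools before running checkstyle. A minimal illustration on a fabricated sample file (not the actual plugin sources):

```shell
# Fabricated sample standing in for a real plugin source file
cat > /tmp/Sample.java <<'EOF'
import java.util.*;
import com.cloud.vnf.VnfDictionaryParser;
import com.cloud.vnf.VnfDictionaryParser;
EOF

# AvoidStarImport: count wildcard imports
star=$(grep -c 'import .*\*;' /tmp/Sample.java)
# RedundantImport: count duplicated import lines
dupes=$(sort /tmp/Sample.java | uniq -d | wc -l)
echo "star=$star dupes=$dupes"
```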
Affected Files (22 Java files):
- 5 API Commands (CreateVnfFirewallRuleCmd, CreateVnfNATRuleCmd, etc.)
- 6 Entity VOs (VnfApplianceVO, VnfBrokerAuditVO, VnfDeviceVO, etc.)
- 2 Service classes (VnfService, VnfServiceImpl)
- 3 Dictionary parsers (VnfDictionaryParser, VnfDictionaryParserImpl, VnfTemplateRenderer)
- 1 Provider (VnfNetworkElement)
- 2 Config classes (VnfFrameworkConfig, VnfResponseParser)
- 3 Tests (VnfBrokerClientTest, VnfOperationDaoImplTest, VnfServiceImplTest)
Phase 1: Python VNF Broker ✅ COMPLETE
- Location: /Builder2/Build/Features/VNFramework/python-broker/
- Status: Production-ready, fully functional
- Deliverables:
- Full CRUD operations (CREATE/READ/UPDATE/DELETE)
- Prometheus metrics (6 metrics exposed at /metrics.prom)
- Docker containerization + docker-compose
- Integration tests (11 test cases, all passing)
- OpenAPI specification (779 lines)
- Python client library (241 lines)
- Mock VNF server (429 lines)
- Complete documentation (CRUD_EXAMPLES.md, PROMETHEUS.md, QUICKSTART.md)
Quick Start (Python Broker):
cd /Builder2/Build/Features/VNFramework
docker-compose up -d
# Verify services
curl -k https://localhost:8443/health
curl -k https://localhost:8443/metrics.prom
# Run integration tests
cd testing
python3 integration_test.py --jwt-token <token>
Phase 2: CloudStack Integration 🔄 IN PROGRESS
- Location: /root/src/cloudstack/plugins/vnf-framework/
- Status: Code exists, needs checkstyle compliance
- Remaining Work:
  - Fix 64 checkstyle violations (22 files)
  - Run unit tests (mvn test -pl :cloud-plugin-vnf-framework)
  - Integration tests with Python broker
  - Full CloudStack build (mvn clean install)
  - Runtime smoke test with management server
  - Deploy network with VNF offering
  - Exercise end-to-end CRUD operations
Step 1: Fix CloudStack Checkstyle (Current Focus)
cd /root/src/cloudstack
# Option A: Auto-fix all violations
# Replace star imports, remove duplicates/unused
# Option B: Skip checkstyle for testing
mvn compile -Dcheckstyle.skip=true
# Verify clean build
mvn -s /Builder2/tools/maven/settings-fixed.xml checkstyle:check -pl :cloud-plugin-vnf-framework
Step 2: Run Unit Tests
mvn -s /Builder2/tools/maven/settings-fixed.xml test -pl :cloud-plugin-vnf-framework
Step 3: Integration Testing
# Start Python VNF broker
cd /Builder2/Build/Features/VNFramework
docker-compose up -d
# Configure CloudStack to connect to broker
# Test API commands calling broker
# Verify CRUD operations end-to-end
Step 4: Full Distribution Build
cd /root/src/cloudstack
mvn -s /Builder2/tools/maven/settings-fixed.xml clean install -DskipTests
# Generates DEBs/RPMs with VNF plugin packaged
Step 5: Runtime Validation
# Deploy CloudStack management server
# Configure VNF provider
# Create network offering with VNF
# Deploy network
# Exercise firewall rule CRUD via CloudStack API
# Verify broker receives and processes requests
# Validate Prometheus metrics
VNF Framework Design & Implementation:
- /Builder2/Build/Features/VNFramework/README.md (implementation guide)
- /Builder2/Build/Features/VNFramework/CRUD_EXAMPLES.md (API examples)
- /Builder2/Build/Features/VNFramework/PROMETHEUS.md (metrics integration)
- /Builder2/Build/Features/VNFramework/QUICKSTART.md (getting started)
- /Builder2/Build/messages/vnf_framework_final_complete_20251107.txt (Phase 1 completion report)
Windows Development:
- /Builder2/Build/windows/README.md (complete Windows server documentation)
- /Builder2/Build/windows/scripts/ (PowerShell management scripts)
- /Builder2/Build/windows/vscode/ (VSCode configuration)
This repo keeps secrets out of Git. On Windows, store your GitHub token encrypted with DPAPI under a hidden .secrets folder (machine/user-bound).
- Store token (encrypt, not committed):
Set-Location "K:\Projects\Build"
$s = Read-Host "Paste GitHub token" -AsSecureString
if (-not (Test-Path .\.secrets)) { New-Item -ItemType Directory .\.secrets | Out-Null; attrib +h .\.secrets }
$enc = ConvertFrom-SecureString $s
Set-Content .\.secrets\github_token.dpapi $enc
- Retrieve token for this session:
Set-Location "K:\Projects\Build"
.\scripts\Get-GitHubToken.ps1 -SetEnv
# Then use tools that read $env:GITHUB_TOKEN
- Retrieve as secure string (for scripts):
$sec = .\scripts\Get-GitHubToken.ps1 -AsSecure
Notes:
- DPAPI binds to the current Windows user and machine. To share across machines, use certificate-based Protect-CmsMessage instead.
- .secrets and backups are ignored by Git (see .gitignore). Never commit tokens.
Scripts under scripts/servers/ help manage Code1/Code2 remotely via WinRM:
- scripts/servers/servers.json: inventory of servers (Name, Host, Role).
- scripts/servers/Get-CodeServers.ps1: loads the server list.
- scripts/servers/Get-CodeCredential.ps1: save/load a DPAPI-encrypted PSCredential (-Save to prompt and store at .secrets/code_pscredential.xml).
- scripts/servers/Test-CodeServers.ps1 [-Name Code1,Code2]: ping, WinRM, and RDP checks.
- scripts/servers/Invoke-CodeServers.ps1 -Name Code1,Code2 -ScriptBlock { $PSVersionTable.PSVersion }: run commands on servers.
Quick start:
Set-Location "K:\Projects\Build"
# One-time: store credentials securely
.\scripts\servers\Get-CodeCredential.ps1 -Save
# Test connectivity
.\scripts\servers\Test-CodeServers.ps1 -Name Code1,Code2
# Run a command
.\scripts\servers\Invoke-CodeServers.ps1 -Name Code1,Code2 -ScriptBlock { hostname }
Windows development servers use K:\projects\build as the standard repo path.
Scheduled tasks and scripts reference K:\projects\build\windows\scripts\....
All Windows servers (this box, Code1, Code2) use a shared projects root and a child Build repo:
- Root projects folder: K:\projects
- Build coordination repo: K:\projects\build
This path is assumed by Windows scripts and the heartbeat scheduled task. If K: is not available, scripts fall back to C:\Build.
Quick setup (if needed):
# Ensure drive and layout
New-Item -ItemType Directory 'K:\projects' -Force | Out-Null
if (-not (Test-Path 'K:\projects\build\.git')) {
    git clone https://github.com/alexandremattioli/Build.git "K:\projects\build"
}
To avoid re-entering credentials for every remote operation on Code1/Code2, save them once using DPAPI encryption:
Set-Location "K:\Projects\Build"
.\scripts\servers\Get-CodeCredential.ps1 -Save
- Prompts for Username and Password
- Stores encrypted PSCredential at .secrets\code_pscredential.xml
- Credential is bound to this Windows user and machine (DPAPI)
- Auto-loaded by Invoke-CodeServers.ps1 if no -Credential parameter is provided
- Not committed to Git (.secrets is ignored)
After saving, all remote commands will use the stored credential automatically.