A Kubernetes controller for managing Helm-based applications across multiple zones using a decentralized control plane (DCP). The controller provides automated deployment, synchronization, and recovery for applications deployed in micro data centers.
The AnyApplication Controller is a custom Kubernetes operator that extends Kubernetes capabilities to manage applications across multiple zones or micro data centers. It enables:
- Multi-Zone Deployment: Deploy Helm charts across multiple zones with configurable placement strategies
- Automated Synchronization: Keep applications synchronized across zones with retry and backoff policies
- Recovery Management: Automatic recovery with configurable tolerance and retry mechanisms
- Placement Strategies: Support for both Local and Global placement strategies
- Ownership Transfer: Manage application ownership and state transitions across zones
- Helm Integration: Native support for Helm charts with flexible configuration options
- Zone-Based Management: Deploy and manage applications across multiple zones with independent versioning
- Flexible Placement: Choose between Local (zone-specific) or Global (multi-zone) placement strategies
- Self-Healing: Automated sync policies with prune, self-heal, and retry capabilities
- State Tracking: Comprehensive status tracking for deployments, placements, and ownership transfers
- Helm Support: Full Helm chart integration with values, parameters, and CRD control
The AnyApplication CRD defines how applications should be deployed across zones. Key specifications include:
- Source: Helm chart repository and configuration
- Zones: Number of zones where the application should be deployed
- PlacementStrategy: Local (single zone) or Global (multi-zone) deployment
- SyncPolicy: Automated synchronization with prune, self-heal, and retry options
- RecoverStrategy: Tolerance and retry configuration for failure recovery
- Local: Application is deployed in a single zone based on zone affinity
- Global: Application is deployed across multiple zones for high availability
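Putting these pieces together, a spec using the Local strategy with an automated sync policy might look like the sketch below. The `syncPolicy` field names (`automated`, `prune`, `selfHeal`, `retry`) and `zoneAffinity` follow common operator conventions and the descriptions above, but they are assumptions; consult the API reference for the exact schema.

```yaml
spec:
  zones: 1
  placementStrategy:
    strategy: Local
    # hypothetical field; the docs say Local placement uses zone affinity
    zoneAffinity: myzone
  syncPolicy:
    # assumed field names, modeled on the prune/self-heal/retry
    # capabilities described above
    automated:
      prune: true
      selfHeal: true
    retry:
      limit: 5
```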
The controller tracks applications through various states:
- Unknown: State cannot be determined
- New: Initial state when the application is created
- Placement: Application placement is being determined
- Operational: Application is running successfully
- Relocation: Application is being moved between zones
- Failure: Application has encountered an error
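To make the lifecycle concrete, here is a minimal, dependency-free Go sketch of these states and a plausible happy-path transition function. The transition rules are illustrative assumptions, not the controller's actual reconciliation logic:

```go
package main

import "fmt"

// State mirrors the lifecycle states listed above.
type State string

const (
	Unknown     State = "Unknown"
	New         State = "New"
	Placement   State = "Placement"
	Operational State = "Operational"
	Relocation  State = "Relocation"
	Failure     State = "Failure"
)

// next sketches a plausible transition for each state. Any unhealthy
// observation moves the application to Failure.
func next(s State, healthy bool) State {
	if !healthy {
		return Failure
	}
	switch s {
	case New:
		return Placement
	case Placement, Relocation:
		return Operational
	case Operational:
		return Operational
	case Failure:
		// assumed: a recovered application re-enters placement
		return Placement
	default:
		return Unknown
	}
}

func main() {
	s := New
	s = next(s, true) // New -> Placement
	s = next(s, true) // Placement -> Operational
	fmt.Println(s)
}
```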
Here's a simple example deploying an NGINX ingress controller:
```yaml
apiVersion: dcp.hiro.io/v1
kind: AnyApplication
metadata:
  name: nginx-app
  namespace: test
spec:
  source:
    helm:
      repository: https://helm.nginx.com/stable
      chart: nginx-ingress
      version: 2.0.1
      namespace: test
  zones: 1
  placementStrategy:
    strategy: Global
  recoverStrategy:
    tolerance: 1
    maxRetries: 3
```
- go version v1.24.0+
- docker version 17.03+.
- kubectl version v1.11.3+.
- Access to a Kubernetes v1.11.3+ cluster.
Make sure the proper kubectl context is configured:

```sh
kubectl config use-context <context>
```

Install the CRDs into the cluster:

```sh
make install
```

Run the manager:

```sh
make run
```

Run a sample application. You can apply the examples from config/samples:

```sh
kubectl apply -f config/samples/nginx.yaml
```

Build and push your image to the location specified by IMG:

```sh
make docker-build docker-push IMG=ghcr.io/hiro-microdatacenters-bv/dcp-application-controller-dev:test-tag
```

NOTE: This image must be published in the registry you specify, and the working environment needs permission to pull it. If the commands above fail, verify that you have the proper access to the registry.
Configure values:
```sh
cat > values.yaml <<EOF
image:
  repository: "ghcr.io/hiro-microdatacenters-bv/dcp-application-controller-dev"
  tag: "test-tag"
configuration:
  runtime:
    zone: myzone
EOF
```

Install the Helm chart:

```sh
helm install anyapp ./charts/anyapplication --values ./values.yaml
```

Run a sample application. You can apply the samples (examples) from config/samples:

```sh
kubectl apply -f config/samples/nginx.yaml
```

Delete the instances (CRs) from the cluster:

```sh
kubectl delete -f config/samples/nginx.yaml
```

Uninstall the Helm chart:

```sh
helm uninstall anyapp
```

Delete the APIs (CRDs) from the cluster:

```sh
make uninstall
```

To install from the published chart repository instead:

```sh
helm repo add anyapp-repo https://hiro-microdatacenters-bv.github.io/anyapplication-controller/helm-charts/
helm repo update
helm install anyapp anyapp-repo/anyapplication --version 0.2.5 --values ./values.yaml
```

Deploy across multiple zones for high availability:
```yaml
spec:
  zones: 3                # Deploy to 3 zones
  placementStrategy:
    strategy: Global
  recoverStrategy:
    tolerance: 1          # Tolerate 1 zone failure
    maxRetries: 3         # Retry failed deployments 3 times
```

Pass custom parameters to Helm charts:
```yaml
spec:
  source:
    helm:
      repository: https://charts.example.com
      chart: my-app
      version: 1.0.0
      namespace: production
      values: |-
        service:
          type: LoadBalancer
        resources:
          limits:
            memory: "512Mi"
```

Check the status of your AnyApplication:

```sh
kubectl get anyapplications -n <namespace>
kubectl describe anyapplication <name> -n <namespace>
```

The status section provides detailed information about:
- Ownership and global state
- Zone-specific deployment status
- Condition history with timestamps
- Version tracking per zone
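For orientation, a status block might look roughly like the following sketch. The field names are illustrative assumptions based on the categories above, not the exact schema; see the API reference for the real shape:

```yaml
status:
  owner: zone-a              # illustrative: zone currently owning the app
  globalState: Operational
  zones:
    zone-a:
      version: 2.0.1         # version tracking per zone
      state: Operational
  conditions:
    - type: Synced
      status: "True"
      lastTransitionTime: "2025-01-01T00:00:00Z"
```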
The controller follows the Kubernetes operator pattern:
- Reconciliation Loop: Continuously watches AnyApplication resources
- Job Management: Creates async jobs for deployment, placement, and ownership tasks
- Zone Coordination: Manages application state across multiple zones
- Helm Integration: Generates and applies Helm manifests per zone
- Status Updates: Tracks conditions and versions for each zone
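The pattern above can be sketched in dependency-free Go. The real controller is built on controller-runtime; the types and action strings here are illustrative only:

```go
package main

import "fmt"

// app is a trimmed stand-in for the AnyApplication resource.
type app struct {
	name     string
	zones    int
	deployed map[string]string // zone -> deployed chart version
	desired  string            // desired chart version
}

// reconcile sketches one pass of the loop: compare desired vs.
// observed state per zone and emit the actions a real controller
// would hand off to async jobs.
func reconcile(a *app) []string {
	var actions []string
	for i := 0; i < a.zones; i++ {
		zone := fmt.Sprintf("zone-%d", i)
		switch v, ok := a.deployed[zone]; {
		case !ok:
			actions = append(actions, "deploy "+a.desired+" to "+zone)
		case v != a.desired:
			actions = append(actions, "sync "+zone+" to "+a.desired)
		}
	}
	return actions
}

func main() {
	a := &app{name: "nginx-app", zones: 2, desired: "2.0.1",
		deployed: map[string]string{"zone-0": "2.0.0"}}
	for _, act := range reconcile(a) {
		fmt.Println(act)
	}
}
```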
For detailed API documentation, see docs/api-reference/anyapplication.md
Application stuck in Placement state
- Check zone availability and node affinity
- Verify placement strategy configuration
Sync failures
- Review retry configuration in syncPolicy
- Check Helm chart repository accessibility
- Examine controller logs:

  ```sh
  kubectl logs -n <controller-namespace> deployment/anyapplication-controller
  ```
Zone version mismatches
- Verify sync policy is configured correctly
- Check for manual interventions in zones
- Review zone status conditions
We welcome contributions! Here's how to get started:
- Clone the repository
- Install dependencies: `go mod download`
- Install CRDs: `make install`

Run the controller locally against a Kubernetes cluster:

```sh
make install  # Install CRDs
make run      # Run controller locally
```

- Unit tests: `make test`
- E2E tests: `make test-e2e` (requires a Kind cluster)
- Linting: `make lint`, or `make lint-fix` for auto-fixes
- Fork the repository
- Create a feature branch
- Make your changes with tests
- Ensure `make test` and `make lint` pass
- Submit a pull request with a clear description
For detailed information on how to contribute, please refer to our contributing guidelines.
NOTE: Run `make help` for more information on all available make targets.
More information can be found in the Kubebuilder Documentation.
Copyright 2025.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.