
operator: Cross-namespace Console clusterRef support#1448

Draft
david-yu wants to merge 4 commits into main from feat/cross-namespace-console-clusterref

Conversation


@david-yu david-yu commented Apr 15, 2026

Summary

Adds an optional namespace field to ClusterRef, enabling Console resources to reference a Redpanda cluster in a different namespace. This is gated to global scope mode only (operator running without --namespace).

Closes #1198

Step-by-Step Usage

Prerequisites

The operator must be running in global scope mode (the default). If you installed with --set scope.namespace="" or simply omitted the scope flag, you are in global mode. Cross-namespace references are rejected when the operator runs with --namespace.

Step 1: Deploy a Redpanda cluster in its own namespace

kubectl create namespace redpanda-cluster

helm install redpanda redpanda/redpanda \
  --namespace redpanda-cluster \
  --set statefulset.replicas=3

Or via the operator:

apiVersion: cluster.redpanda.com/v1alpha2
kind: Redpanda
metadata:
  name: redpanda
  namespace: redpanda-cluster
spec:
  clusterSpec:
    statefulset:
      replicas: 3

Step 2: Create a Console namespace

kubectl create namespace console

Step 3: Deploy Console with a cross-namespace clusterRef

apiVersion: cluster.redpanda.com/v1alpha2
kind: Console
metadata:
  name: console
  namespace: console
spec:
  cluster:
    clusterRef:
      name: redpanda
      namespace: redpanda-cluster   # references the cluster in a different namespace

Apply it:

kubectl apply -f console.yaml

The operator resolves the Redpanda resource from the redpanda-cluster namespace, extracts connection parameters (broker addresses, admin API endpoints, TLS configuration), and deploys Console in the console namespace with those settings.

Step 4: (If using NetworkPolicies) Allow cross-namespace traffic

If your cluster enforces NetworkPolicies, Console pods need ingress to the Redpanda namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-console-to-redpanda
  namespace: redpanda-cluster
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: redpanda
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: console
      ports:
        - port: 9092   # Kafka API
        - port: 9644   # Admin API
        - port: 8081   # Schema Registry
        - port: 8082   # HTTP Proxy

Step 5: Verify Console is connected

kubectl get console -n console
kubectl logs -n console deployment/console

Console should show successful connections to the Redpanda brokers in the redpanda-cluster namespace.

Using StaticConfiguration instead (alternative)

If you prefer explicit connection strings over clusterRef, you can use staticConfiguration, which already supports cross-namespace connections without any operator changes:

apiVersion: cluster.redpanda.com/v1alpha2
kind: Console
metadata:
  name: console
  namespace: console
spec:
  cluster:
    staticConfiguration:
      kafka:
        brokers:
          - redpanda-0.redpanda.redpanda-cluster.svc.cluster.local:9093
          - redpanda-1.redpanda.redpanda-cluster.svc.cluster.local:9093
          - redpanda-2.redpanda.redpanda-cluster.svc.cluster.local:9093

The clusterRef.namespace approach is simpler because the operator automatically resolves all connection details from the Redpanda resource.
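For reference, the broker hostnames above follow the standard StatefulSet headless-service pattern (pod.service.namespace.svc.cluster.local). A small Go sketch that generates them; brokerAddrs is a hypothetical helper, not operator code, and it assumes the default Helm naming where the headless Service shares the release name and 9093 is the internal Kafka listener:

```go
package main

import "fmt"

// brokerAddrs builds the per-pod FQDNs for a Redpanda StatefulSet
// exposed through a headless service, matching the example above.
// cluster is both the StatefulSet and Service name in a default
// Helm install (an assumption of this sketch).
func brokerAddrs(cluster, namespace string, replicas, port int) []string {
	addrs := make([]string, 0, replicas)
	for i := 0; i < replicas; i++ {
		addrs = append(addrs, fmt.Sprintf("%s-%d.%s.%s.svc.cluster.local:%d",
			cluster, i, cluster, namespace, port))
	}
	return addrs
}

func main() {
	for _, a := range brokerAddrs("redpanda", "redpanda-cluster", 3, 9093) {
		fmt.Println(a)
	}
}
```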

Design

How it works

  1. ClusterRef gains an optional namespace field
  2. When namespace is set, the Console controller looks up the Redpanda resource in that namespace instead of the Console's own namespace
  3. The field indexer and client factory are updated to use the ClusterRef namespace for cross-namespace resolution
  4. Validation: If the operator is running in namespaced mode (--namespace=X), cross-namespace references are rejected at reconcile time with a clear error

Scope gating

Operator mode                     Cross-namespace ref    Result
Global scope (no --namespace)     namespace: other-ns    Allowed
Namespaced (--namespace=my-ns)    namespace: other-ns    Rejected with error
Either                            no namespace field     Same-namespace (existing behavior)
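The gating table can be expressed as a small decision function. validateRef and its parameter names are illustrative, not taken from the operator source; operatorNamespace is empty in global scope mode:

```go
package main

import (
	"errors"
	"fmt"
)

// validateRef encodes the scope-gating table above: a cross-namespace
// clusterRef is only permitted when the operator runs in global scope
// (empty operatorNamespace). refNamespace is the optional
// clusterRef.namespace; objNamespace is the Console's own namespace.
func validateRef(operatorNamespace, refNamespace, objNamespace string) error {
	crossNamespace := refNamespace != "" && refNamespace != objNamespace
	if crossNamespace && operatorNamespace != "" {
		return errors.New("cross-namespace clusterRef requires the operator to run in global scope mode")
	}
	return nil
}

func main() {
	// Global scope: cross-namespace ref allowed.
	fmt.Println(validateRef("", "other-ns", "console") == nil)
	// Namespaced mode: cross-namespace ref rejected.
	fmt.Println(validateRef("my-ns", "other-ns", "console") != nil)
	// No namespace field: same-namespace, allowed in either mode.
	fmt.Println(validateRef("my-ns", "", "console") == nil)
}
```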

Migrating Console from Same Namespace to Separate Namespace

This migration is for users who currently run Console and Redpanda in the same namespace and want to separate them. Console is stateless, so it can be done with zero downtime using parallel deployments.

Starting state

namespace: redpanda
  ├── Redpanda (cluster)
  └── Console (clusterRef: {name: redpanda})   ← to be moved

Step 1: Verify operator is in global scope mode

# Check the operator deployment args — should NOT have --namespace
kubectl get deployment -n redpanda-operator redpanda-operator -o jsonpath='{.spec.template.spec.containers[0].args}'

If the operator is running with --namespace, you must first reconfigure it for global scope:

helm upgrade redpanda-operator redpanda/operator --set scope.namespace=""

Step 2: Create the target namespace

kubectl create namespace console

Step 3: Copy any required Secrets to the new namespace

If Console uses TLS or SASL authentication, copy the relevant Secrets:

# Example: copy a TLS CA certificate Secret
kubectl get secret redpanda-default-cert -n redpanda -o yaml \
  | sed 's/namespace: redpanda/namespace: console/' \
  | kubectl apply -f -

Alternatively, use a Secret mirroring solution (e.g., Reflector, ExternalSecrets) to keep them in sync.
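Whichever copy mechanism you use, the cluster-assigned metadata must not travel with the Secret. A minimal sketch of that rule, matching what the sed pipeline above does plus the server-managed fields that kubectl get -o yaml exports; retarget is a hypothetical helper operating on plain maps, where real code would use a corev1.Secret:

```go
package main

import "fmt"

// retarget copies Secret metadata into a new namespace, dropping the
// fields the API server assigns per object. Re-applying uid or
// resourceVersion in a different namespace would either conflict or
// be rejected, so they must never be copied.
func retarget(meta map[string]string, targetNamespace string) map[string]string {
	out := make(map[string]string, len(meta))
	for k, v := range meta {
		switch k {
		case "uid", "resourceVersion", "creationTimestamp":
			continue // server-assigned identity: never copy
		case "namespace":
			out[k] = targetNamespace
		default:
			out[k] = v
		}
	}
	return out
}

func main() {
	src := map[string]string{
		"name":            "redpanda-default-cert",
		"namespace":       "redpanda",
		"uid":             "1234-abcd",
		"resourceVersion": "42",
	}
	fmt.Println(retarget(src, "console"))
}
```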

Step 4: Deploy Console in the new namespace with cross-namespace clusterRef

Export your existing Console configuration as a starting point:

kubectl get console console -n redpanda -o yaml > console-new.yaml

Edit console-new.yaml:

apiVersion: cluster.redpanda.com/v1alpha2
kind: Console
metadata:
  name: console
  namespace: console               # ← new namespace
spec:
  cluster:
    clusterRef:
      name: redpanda
      namespace: redpanda           # ← points back to the Redpanda namespace
  # ... rest of your existing spec

Apply it:

kubectl apply -f console-new.yaml

Step 5: Verify the new Console is working

# Check status
kubectl get console -n console

# Check pods are running
kubectl get pods -n console

# Check logs for successful broker connections
kubectl logs -n console deployment/console | head -20

At this point, both Console instances are running — old in redpanda, new in console.

Step 6: Update DNS, Ingress, or port-forwards to the new namespace

If you have an Ingress or Service pointing to Console:

# Update Ingress backend to point to the new namespace service
kubectl edit ingress console-ingress -n redpanda
# Change: service.name → console, service.namespace → console

# Or if using port-forward, switch:
kubectl port-forward -n console svc/console 8080:8080

Step 7: Delete the old Console

Once traffic is fully migrated:

kubectl delete console console -n redpanda

Final state

namespace: redpanda
  └── Redpanda (cluster)

namespace: console
  └── Console (clusterRef: {name: redpanda, namespace: redpanda})

Security Considerations

Cross-namespace Console deployment introduces several security implications that operators should evaluate:

1. RBAC and Secret access

When Console references a cluster in another namespace, the operator's ServiceAccount needs read access to Secrets in the target namespace (for TLS certificates, SASL credentials, etc.). In global scope mode, the operator already has cluster-wide permissions, so this is not a new escalation. However, operators should audit:

  • The operator ClusterRole already grants secrets access across all namespaces in global mode
  • No additional RBAC changes are needed for this feature

2. Credential exposure across namespace boundaries

The operator reads connection details (Kafka brokers, admin API endpoints, TLS config) from the Redpanda resource in the target namespace and injects them into Console's ConfigMap/Deployment in the Console namespace. This means:

  • Broker addresses cross the namespace boundary (low sensitivity)
  • TLS CA certificates are referenced by Secret name; Console must be able to mount them. If the Secrets are in the Redpanda namespace, a ReferenceGrant or mirrored Secret may be needed depending on your network policy setup
  • SASL credentials follow the same pattern — the operator resolves them from the cluster's namespace

3. Network policy implications

Console pods in namespace A need network access to Redpanda pods in namespace B. If you use NetworkPolicies to isolate namespaces, you must explicitly allow cross-namespace ingress as shown in Step 4 above.

4. Blast radius

If the Console namespace is compromised, an attacker gains:

  • Read access to Redpanda cluster connection parameters (broker addresses, ports)
  • Potentially access to SASL credentials if they are mirrored into the Console namespace
  • Network connectivity to the Redpanda cluster (if network policies allow it)

This is the same blast radius as running Console in the same namespace, except the compromise of the Console namespace no longer automatically compromises other workloads co-located with Redpanda.

5. Recommendation

Cross-namespace deployment is appropriate when:

  • You want workload isolation (Console crashes/resource spikes don't affect Redpanda)
  • You need separate RBAC (different teams manage Console vs Redpanda)
  • You enforce network policies per namespace and want explicit traffic rules
  • You run the operator in global scope mode already

It is NOT recommended when:

  • You run a multi-tenant cluster where namespace isolation is a security boundary
  • You cannot audit or control the operator's cluster-wide Secret access

Files changed

File                                                 Change
operator/api/redpanda/v1alpha2/common.go             Add Namespace to ClusterRef; GetNamespace(), IsCrossNamespace() helpers
operator/internal/controller/console/controller.go   Use ClusterRef namespace in cluster lookup; validate cross-ns refs in namespaced mode
operator/pkg/client/factory.go                       Use ClusterRef namespace in getV2Cluster()
operator/internal/controller/index.go                Use ClusterRef namespace in field indexer
operator/config/crd/bases/*.yaml                     Regenerated CRDs with namespace field
.changes/unreleased/operator-Added-*.yaml            Changelog entry

Test plan

  • Operator compiles: go build ./operator/...
  • CRDs regenerated with namespace field
  • Unit test: cross-namespace lookup succeeds in global mode
  • Unit test: cross-namespace lookup rejected in namespaced mode
  • Integration test: Console in namespace A connects to Redpanda in namespace B

🤖 Generated with Claude Code

david-yu and others added 4 commits April 14, 2026 20:32
Adds an optional `namespace` field to ClusterRef, enabling Console
resources to reference a Redpanda cluster in a different namespace.

This is gated to global scope mode only: when the operator runs with
--namespace (single-namespace mode), cross-namespace references are
rejected at reconcile time with a clear error message.

Changes:
- Add Namespace field to ClusterRef with GetNamespace()/IsCrossNamespace() helpers
- Console controller: use ClusterRef namespace for cluster lookup
- Client factory: use ClusterRef namespace in getV2Cluster()
- Index functions: use ClusterRef namespace for cross-namespace indexing
- Console controller: validate cross-namespace refs rejected in namespaced mode
- Regenerate CRDs with namespace field on all ClusterRef-using resources

Example:
  apiVersion: cluster.redpanda.com/v1alpha2
  kind: Console
  metadata:
    name: console
    namespace: monitoring
  spec:
    cluster:
      clusterRef:
        name: redpanda
        namespace: redpanda-cluster   # cross-namespace reference

Closes #1198

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The gci formatter expects github.com/redpanda-data/common-go imports
in the third-party group, not the internal group.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…test

staticcheck flagged vectorizedv1alpha1.AddToScheme as deprecated. Switch to
Install to match the v2 scheme registration on the line above and to silence
SA1019 in CI.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@github-actions

This PR is stale because it has been open 5 days with no activity. Remove stale label or comment or this will be closed in 5 days.

@github-actions github-actions Bot added the stale label Apr 23, 2026
@david-yu david-yu removed the stale label Apr 23, 2026


david-yu commented May 1, 2026

E2E test on GKE — passes; safe to use for blue/green

Built the operator from the PR head (commit 6a191b4c), pushed to GAR, and ran on a GKE 1.35 cluster (rp-gcp in us-east1, n2-standard-4 nodes). Cluster torn down after the test.

Setup

Namespace           Resource
redpanda-operator   operator (global scope, no --namespace flag)
redpanda            Redpanda/rp (1 replica, no TLS, no console)
console-blue        Console/console with clusterRef {name: rp, namespace: redpanda}
console-green       Console/console with clusterRef {name: rp, namespace: redpanda}

Results

Both Console pods reached Ready 1/1 within ~25s of apply and connected to the cross-namespace cluster. Confirmed by Console logs:

"connecting to Kafka seed brokers" seed_brokers=["rp-0.rp.redpanda.svc.cluster.local.:9093"]
"successfully connected to kafka cluster" advertised_broker_count=1

The seed broker DNS contains .redpanda.svc..., which proves the operator resolved the Redpanda CR from the redpanda namespace (not the Console's own namespace) when rendering the Console ConfigMap. Created a topic test-cross-ns via rpk on the broker; both Consoles' /api/topics endpoint returned it. Both Console CRs reported readyReplicas: 1 and the operator log showed clean reconciles for both with no errors.

Question 1: Two Console deployments in different namespaces, same cluster — issues?

No issues observed. The PR only changes the lookup namespace for the Redpanda CR; everything Console-side (Deployment, Service, ConfigMap, JWT secret) is still rendered into the Console's own namespace, so there are no name collisions or shared owner-references between the two Consoles. They behave like two independent Kafka clients. Caveats worth flagging in docs:

  • JWT signing key: each Console generates its own. A user authenticated against blue gets logged out when traffic is cut to green unless the signing-key Secret is shared between namespaces.
  • SASL/TLS Secrets: must be mirrored into each Console namespace (or use a Secret-mirroring controller). The PR's security section already calls this out.
  • Enterprise Console RBAC (if enabled): roles/audit data are persisted in cluster-internal topics; two Consoles writing concurrently under the same identity could race. For OSS / read-only use this is a non-issue, which is what I tested.

Question 2: Useful for blue/green Console upgrades?

Yes — this is essentially the migration flow already documented in the PR description, and the same pattern works for version-to-version upgrades. Demonstrated by scaling console-blue to 0 replicas mid-test; console-green continued serving topic data with no reconcile churn on its CR. The cutover pattern is:

  1. Deploy green Console (new version) in a new namespace with clusterRef.namespace pointing at the existing Redpanda
  2. Verify green is healthy (Ready 1/1, broker connection in logs)
  3. Flip Ingress/Service backend (or DNS) from blue to green
  4. Delete blue Console once traffic has drained

Two preconditions worth documenting:

  • Operator must already be in global scope (this PR's gate).
  • For a seamless cutover (no forced re-login of UI users), the JWT signing key Secret must be shared between the blue and green namespaces.

Negative path

I did not re-test the namespaced-mode rejection path on GKE — the unit test TestControllerRejectsCrossNamespaceClusterRefInNamespacedMode already covers it and would have required a second helm upgrade.

Cleanup

Test namespaces, operator install, CRDs, the GAR repo for the test image, and the GKE cluster have all been torn down.


Development

Successfully merging this pull request may close these issues.

Allow console clusterRef to reference cluster in different namespace
