Skip to content
This repository was archived by the owner on Dec 18, 2025. It is now read-only.
This repository was archived by the owner on Dec 18, 2025. It is now read-only.

[Enhancement] Update CNCF Cloud Native AI White Paper - Security Section for AI-Specific Threats and Modern Attack Vectors #217

@PrateekKumar1709

Description

@PrateekKumar1709

The current Security and Compliance Audits section focuses only on traditional cybersecurity practices without addressing AI-specific threats that have emerged as critical attack vectors in modern AI/ML deployments. This creates a significant gap in security guidance for cloud-native AI implementations.

Current Content:
Location: Security and Compliance Audits section

All outward facing services, particularly Model Serving instances, need firewall protection, access control, and more. And like any other service, your AI/ML workloads must follow security best practices. These include penetration testing, vulnerability scanning, and compliance checks of the workload domain, such as health care, finance, etc.

Tools like Grype and Trivy can scan containerized workloads for vulnerabilities.

Kyverno and policy enforcement services can ensure containerized workloads are running at the lowest privilege necessary with minor capabilities needed.

Issues with Current Content:

  • Missing AI-Specific Threats: No mention of prompt injection, model poisoning, adversarial attacks, or model extraction attempts
  • Outdated Security Approach: Focuses only on traditional infrastructure security without addressing AI model security
  • Incomplete Threat Model: Doesn't reflect the evolved threat landscape specific to AI/ML workloads
  • Missing AI Security Tools: No mention of AI-specific security monitoring and protection mechanisms

Proposed Correction;
Replace the current content with:

AI/ML workloads face both traditional security challenges and AI-specific threats including prompt injection attacks, model poisoning, adversarial inputs, and model extraction attempts. Modern AI security requires specialized protection beyond standard penetration testing and vulnerability scanning, including input validation, model integrity verification, and AI-specific monitoring tools. Current best practices include implementing guardrails for model outputs, adversarial training, and continuous monitoring for model drift and anomalous inference patterns.

Traditional security tools like Grype and Trivy remain important for scanning containerized workloads for vulnerabilities, while Kyverno and policy enforcement services ensure workloads run with minimal privileges. However, AI workloads additionally require specialized security measures such as:

- Input sanitization and validation frameworks to prevent prompt injection
- Model integrity monitoring to detect tampering or poisoning attempts  
- Adversarial robustness testing during model validation
- Output filtering and guardrails to prevent harmful or biased responses
- Continuous monitoring for model behavior anomalies and drift

An additional layer of security is possible using confidential computing or Trusted Execution Environments (TEE). These hardware-supported environments provide encrypted memory, data integrity protection, and testability. TEEs protect the data and workload from other infrastructure users while in use. AMD, Intel, NVIDIA, and IBM have TEE offerings, and they are becoming available in public clouds. Protecting sensitive data such as health care and financial information and ML models are prime use cases.

Happy to create a PR if this looks good

Metadata

Metadata

Assignees

No one assigned

    Labels

    No labels
    No labels

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests

    Issue actions