RFC-0024 FedRAMP Rev5 Machine-Readable Packages #114
-
Since there is a Background section that addresses FedRAMP's OSCAL history, I want to offer the following additions and corrections.

When did Filming Start?
What's Up with the Ratings?
Industry showed overwhelming interest in OSCAL from 2018 through 2024, as evidenced by consistently high attendance at NIST and FedRAMP OSCAL events by agencies, CSPs, and GRC tool vendors. The annual NIST OSCAL conference typically filled large auditoriums with participants overwhelmingly focused on FISMA and FedRAMP use cases.

Industry says, "Show Me the Money!"
Most stakeholders - especially CSPs - have been clear for years: a "considerable promise" isn't sufficient for their boards and shareholders. Short of a mandate, they couldn't justify an investment until the FedRAMP PMO was able to demonstrate:
(Extra emphasis on the word "demonstrate".)

It's Showtime!
A few brave souls took a chance on OSCAL in 2022:
At that time the PMO:
Industry's takeaway:
Everybody started to hold their breath waiting for the PMO's readiness to improve.

FedRAMP Automation: TNG
Cancelled on a Cliff Hanger
If we are going to make assertions about industry's interest in OSCAL, we should do so with a more complete accounting of historical events.
-
Grateful for an Edgy Reimagining
On the whole, I am excited that the PMO is considering a move to machine-readable packages and welcome the initiative. Please accept the comments below as helping to focus resources and avoid unnecessary challenges as the community moves toward this important goal!
As I mentioned in the above comment, industry could only justify a voluntary investment in OSCAL after the PMO demonstrates readiness. Mandating machine-readable deliverables will likely get it done; however, the PMO needs to be ready to receive and process this content. There are a few tools that can start delivering FedRAMP packages to the PMO in OSCAL format immediately. The PMO MUST be ready. Industry needs a better experience compared to the 2022 submissions.

57 Channels
While I appreciate the spirit of the PMO's intention to allow formats other than OSCAL, I'd caution the PMO not to trivialize what it takes to ingest and process multiple, competing formats when the subject matter is complex security information. For comparison, when the PMO used to accept raw scanner tool output from CSPs, there were ~20 different formats, all very flat, with about 15 fields relevant to PMO processing and ConMon reporting. It took a dedicated team of 5 FTEs and a bespoke tool to ingest and process. Security package data is orders of magnitude more complex. Realistically, each new format will require a team of developers several months to adjust tooling. That challenge is compounded if the PMO receives several new formats simultaneously during the initial transition period. Regardless of other formats, there is no doubt OSCAL will be one of the formats the PMO has to support. Too many CSPs have invested in it, and for those that need to comply with multiple frameworks, it is the only viable choice. Perhaps the PMO should prepare to receive OSCAL content first, then use that to better estimate the LOE required to prepare for each new format.

Revising the Script
As one of the co-creators of OSCAL, I'll be the first to acknowledge it isn't perfect. Sometimes the team couldn't agree internally and we went one way or the other. Sometimes we did the best we could in the time we had and in the context of the use cases in front of us. There are definitely opportunities for improvement. That said, OSCAL is fundamentally viable. I'd strongly encourage the PMO to bring any specific concerns it has with OSCAL to the OSCAL community, either via a GitHub issue in the NIST OSCAL repo or by putting it directly on the OSCAL Foundation's radar with an issue in the Foundation repo.

Casting Call
This does a great job of establishing the CSP's role. While the timeframes are for the CSPs to debate, the approach is great! 3PAOs need to be better written into the script, given the SAP and SAR are their deliverables. For example, there is a grace period for existing authorizations, but new authorizations are required to be machine-readable sooner. If the PMO's intention is for new authorizations to only use a 3PAO with the ability to generate a machine-readable SAP and SAR, that might be fine. Either be explicit about that expectation or consider setting separate expectations for CSP deliverables vs 3PAO deliverables.
-
I support this initiative to modernize authorization packages. A few observations:

PMO Readiness: The success of this transition depends on FedRAMP's own tooling being ready. Publishing validation tools and acceptance criteria well before April 2026 will help CSPs test their outputs and build confidence in the process.

Multiple Format Support: The provision for alternative formats (LMR-FRX-LAF) is flexible, but maintaining tooling parity across multiple formats will require significant effort. Consider clarifying the approval process and whether there's a practical limit on supported formats.

3PAO Obligations: The RFC focuses on CSP requirements but doesn't explicitly address 3PAO deliverables (SAP/SAR). Clarifying whether 3PAOs must meet the same machine-readable timelines would help avoid bottlenecks.

Significant Change Definition: LMR-GEN-USC requires updates within one month of significant changes. Updated guidance on what qualifies as "significant" in the machine-readable context would help ensure consistent compliance.

Overall, this is a welcome modernization effort. Clear communication and demonstrated PMO readiness will be key to successful adoption.
-
The Background section should reflect the actual OSCAL timeline and, more importantly, the real adoption story: strong stakeholder interest for years, but limited production adoption because the PMO could not demonstrate end-to-end intake readiness and tangible benefits CSPs could justify. As a co-creator of OSCAL, @brian-ruf should know the history well. Second, I strongly agree with your caution in “Grateful for an Edgy Reimagining.” The RFC is right to push machine-readable packages, but we should not minimize what it takes for the PMO to ingest, validate, correlate across artifacts, and operationalize this at scale. Your “57 channels” point is exactly the risk. If multiple formats are allowed during the transition, the PMO and agencies will pay for it in duplicated tooling, inconsistent validation, and review friction. Where I come out stricter is on the format question. Even if structured formats are theoretically interchangeable, the government consumption problem is very real. Agencies will not want a world where CSP A submits OSCAL, CSP B submits some alternative, and each agency toolchain has to support both. That becomes an interoperability tax, and it will show up as annoyed agencies and slowed adoption. My recommendation:
That approach matches your caution about ingestion complexity while also preventing the fragmentation risk. It also creates the “demonstrate” moment CSPs need: a single target with predictable validation and predictable downstream utility. On your “Casting Call” point, I also agree the RFC should be clearer on expectations and sequencing across CSPs, 3PAOs, and the PMO. If the PMO cannot validate for FedRAMP completeness and correlate across artifacts, requiring OSCAL will not, by itself, produce better outcomes. The readiness work and the mandate have to land together. A compromise that still avoids chaos:
This is one of those areas where being less strict up front creates years of headaches later. Consistency is the feature here. This feels like Congress passing a bill that’s so vague it invites loopholes and inconsistent interpretation. Same risk here. If FedRAMP leaves the file format open-ended, providers will pick whatever is easiest for them, and agencies will be stuck dealing with the fallout. The requirement should be tightened to a single, specific mandated format.
-
5.5 months between "FedRAMP to publish materials to support industry adoption of machine-readable authorization packages" and "Requirements for adopting machine-readable authorization packages take effect; failure to meet these requirements on the applicable timelines will result in public notification" seems extremely tight and will be extraordinarily difficult to implement while meeting the intent of the requirement. Even more so for those with FedRAMP High authorizations, where the lack of FedRAMP High authorized SaaS tools in this space leads to additional friction with agencies and complexity in meeting the actual requirements. More time should be provided between when FedRAMP publishes materials and when the adoption requirement takes effect (I understand there are exceptions based on annual assessment windows, etc.).
-
It would make a lot of sense to determine the specific SSP parts that will be mandatory for submission under the new format. There was a short list of appendices published a while back, but at this point CSPs need clear direction to be able to prioritize. There are already a few platforms/vendors offering this capability and, from a cost/planning perspective, CSPs need to know if this will be a requirement for the SSP as a whole or for parts of it. What about ConMon and POA&M submissions?
-
I am a pretty strong supporter of mandating OSCAL as the canonical format, admittedly even more so after the release of this RFC. If CSP A submits in JSON, CSP B in YAML, and CSP C in OSCAL, every agency needs tooling that handles all three, all but mandating that every GRC vendor build translation layers for every combination. Some GRC tools are already executing here pretty well with regard to converting the current Word doc/Excel format to OSCAL specifically. One format means one target for everyone instead of multiplying the problem across the CSP ecosystem. The RFC is clear on what CSPs must produce, but I don’t think it touches on an important subject: agency readiness to consume. From my experience, many agencies don't yet have the capacity to ingest machine-readable data, which risks making this an additive burden. Is FedRAMP actively advising or incentivizing agencies to adopt GRC tools to get ahead of consumption readiness, or is the expectation that changing CSP requirements will naturally pull agencies to the table as more authorized GRCs emerge to facilitate? I would ask you to consider adding language that sets expectations for CSPs on how long dual-format maintenance is anticipated, or establishes milestones for revisiting LMR-GEN-HRV as agency consumption becomes more commonplace.
-
Hi - I'm Stephen Banghart, Technical Coordinator of the OSCAL Foundation. Below is the Foundation's response to RFC-0024:

The OSCAL Foundation appreciates FedRAMP’s continued dedication to working with industry to modernize the FedRAMP Rev5 process. We firmly believe that moving to machine-readable packages will create an ecosystem that is both more secure and more resource efficient for vendors, customers, auditors, and regulators. We applaud the general direction that these RFCs are suggesting and are excited for a future with fewer spreadsheets and more automation. The Foundation membership agrees with the general industry sentiment that while flexibility within the package format can be helpful, multiple independent formats can ultimately lead to an increase in burden on agencies, vendors, and FedRAMP itself. Industry standards are born out of a shared desire to express information in an interoperable way, raising the bar for everyone and enabling use cases and workflows that wouldn’t otherwise be possible. We acknowledge that the OSCAL formats are not perfect – standards rarely are – but that as an industry-led community we are dedicated to evolving OSCAL into the technology that FedRAMP and the wider community needs to accomplish the requirements laid out in these RFCs. To that end, the OSCAL Foundation is committed to addressing industry and FedRAMP feedback to rapidly develop and update OSCAL in the coming months. We hear the FedRAMP PMO’s call to show – not tell, so expect to see relevant materials and OSCAL updates as quickly as is practical. We welcome all stakeholders to take a renewed look at OSCAL and to join us in shaping the common language for GRC interoperability — for FedRAMP and beyond.
-
Regarding: ""LMR-GEN-DGI Deterministically Generated Illustrations - Providers SHOULD use machine-generated deterministic telemetry to generate all necessary illustrations and diagrams, including at least the Authorization Boundary Diagram." As a Rev 5 CSP with FedRAMP Moderate complete BoE based currently on all existing FedRAMP rev 5 Templates the conversion of the appropriate sections of the SSP Front Matter template seems to be able to be handled by current and existing OSCAL GRC tooling vendors we are speaking with. I am thinking also that all relevant appendices as well as the Rev 5 Ballance Improvement Releases can be managed via an OSCAL output process. What is NOT available is "use machine-generated deterministic telemetry to generate all necessary illustrations and diagrams, including at least the Authorization Boundary Diagram". From the GRC vendor, their position is for FedRAMP PMO to push this recommendation OUT at least until the vendors can provide a sustainable and readable solution. I also agree that the timelines are very TIGHT between release of FedRAMP guidance and CSP full adoption. Understand completely the need to stress timelines, etc., just looks like a bit of a squeeze is all. |
-
General Comment: Strategic Alignment with FedRAMP 20x
Given that FedRAMP has publicly announced the 20x framework as the modernized path forward with a long-term goal of sunsetting Rev5, I respectfully question whether substantial modifications to Rev5 packages represent the most effective use of CSP resources. Requiring parallel process changes may create duplicative compliance burdens for Cloud Service Providers and divert attention from the security-focused improvements that 20x is designed to deliver.

Assuming Implementation Proceeds: Specific Comments

Comment 1: Request for Phased Implementation via Beta Program
I strongly recommend that FedRAMP establish a formal beta program prior to full implementation of these requirements. Transitioning approximately 500 CSOs to machine-readable packages simultaneously introduces significant risk of widespread implementation issues. FedRAMP already employs beta programs successfully for Balance Improvement Releases (BiR) involving far more modest process changes. A change of this magnitude warrants, at minimum, a comparable piloting approach. A structured beta period would allow FedRAMP and industry stakeholders to identify and resolve technical challenges, tooling gaps, and process inefficiencies before requiring compliance across the entire CSO population. This approach would strengthen the final implementation and reduce disruption to both providers and agencies.

Comment 2: LMR-FRX-LAF (List of Approved Formats) — Recommend Limiting to OSCAL
I recommend that FedRAMP limit the approved machine-readable format to OSCAL exclusively. If multiple providers adopt different formats, even with as few as five CSPs per alternative format, GRC vendors will face significant challenges supporting these variations at scale. This fragmentation would increase implementation costs, create integration complexity, and potentially undermine the interoperability goals this RFC seeks to achieve. Given that government and industry have largely converged on OSCAL as the standard for security control automation, formalizing this consensus would provide clarity and reduce ecosystem fragmentation. While the RFC notes that "automatically converting between standardized formats has been a normal part of exchanging data between computer systems for 50+ years," this theoretical interoperability does not reflect the practical realities of the GRC tooling ecosystem.

Comment 3: LMR-GEN-DGI (Deterministically Generated Illustrations) — Recommend Removal
I recommend removing this requirement. For highly complex CSOs, deterministically generating accurate authorization boundary diagrams and related illustrations that are then human-readable is not currently feasible with existing tooling. Based on a review of the market, there is no quality, commercially available software that can reliably produce these artifacts for complex environments (e.g., large multi-service, multi-IaaS). While some tools can technically generate diagrams, the outputs for complex environments are often so large and dense that they become effectively unreadable, defeating the purpose of the illustration. Establishing this as a recommendation, with the consequence that non-compliant packages will carry a warning label indicating diagrams "may be unreliable," effectively penalizes CSPs for limitations in the tooling market rather than for any deficiency in their security posture or documentation practices.

Comment 4: LMR-GEN-UDT (Use Deterministic Telemetry) — Recommend Removal
I recommend removing this requirement. While the intent to reduce manually written narratives is understood, the practical outcome may be counterproductive. In the absence of mature tooling, providers may resort to AI-generated content to produce "deterministic telemetry," resulting in outputs that appear machine-generated but lack genuine evidentiary value. This concern is particularly relevant given the RFC's own definition explicitly excludes "probabilistic inferences, generative outputs, or predictive assessments such as those produced using generative transformer models." Without clear implementation guidance and available tooling, this requirement risks encouraging exactly the type of content it seeks to prohibit.

Comment 5: LMR-FRX-LAM (List of Authorization Materials) — Recommend Phased Scope
From a Cloud Service Provider perspective (excluding 3PAO-specific documents like the SAP and SAR), I recommend initially limiting the machine-readable format requirement to security controls, the Customer Responsibility Matrix (CRM), and the Control Implementation Summary (CIS) workbook. These are the materials that benefit most from machine-readable formatting. Security controls are central to continuous monitoring, while the CRM and CIS are essential for agency adoption, helping customers quickly understand inherited controls and shared responsibilities. All other authorization documents could remain in standard Word or PDF formats during the initial phase, allowing providers to focus their efforts where automation delivers the most value.
-
RFC-0024 is a significant step toward enabling agencies to automate authorization decisions, and I am very excited about it. To fully realize this goal, the RFC should be strengthened by also addressing semantic standardization alongside format requirements. Machine-readable packages become machine-comparable when they share common identifiers. For example, requiring Package URL (PURL) for component identification, which is supported by both major SBOM formats, would enable consistent inventory comparison across multiple cloud service offerings, supplemented by CPE for commercial products where PURL coverage is limited.
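To make the identifier point concrete, here is a minimal sketch of how shared PURL identifiers let inventories from different offerings be compared mechanically. The package names, versions, and the helper function are illustrative only, not part of the RFC or any SBOM tooling:

```python
# Minimal sketch: Package URLs (purl) as shared component identifiers so that
# inventories from different cloud service offerings can be compared
# mechanically. All package names and versions below are illustrative.

from urllib.parse import quote


def make_purl(pkg_type: str, name: str, version: str, namespace: str = "") -> str:
    """Build a Package URL of the form pkg:type/namespace/name@version."""
    parts = [pkg_type]
    if namespace:
        parts.append(quote(namespace, safe=""))
    parts.append(quote(name, safe=""))
    return "pkg:" + "/".join(parts) + "@" + quote(version, safe="")


# Two hypothetical CSO component inventories keyed by purl. Because both use
# the same canonical identifier scheme, overlap is a set intersection rather
# than bespoke name matching.
cso_a = {
    make_purl("pypi", "django", "4.2.11"),
    make_purl("deb", "openssl", "3.0.2", namespace="ubuntu"),
}
cso_b = {
    make_purl("pypi", "django", "4.2.11"),
    make_purl("npm", "lodash", "4.17.21"),
}

print(cso_a & cso_b)  # {'pkg:pypi/django@4.2.11'}
```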
-
Gap #1 – PMO Readiness & Tooling
Gap #2 – Implementation Timeline
Gap #3 – 3PAO Deliverables
Gap #4 – Deterministic Illustrations & Telemetry
Gap #5 – Scope of Initial Machine-Readable Submissions
Gap #6 – Risk of Duplication and GRC Vendor Burden
Gap #7 – Canonical Format / Format Fragmentation
-
Curious to see how the following shake out as this goes into effect:
-
Feedback: FedRAMP Rev5 Machine-Readable Packages
Thank you for RFC-0024 modernizing FedRAMP authorization packages to a machine-readable format. I support automation and standardization. However, RFC-0024 creates two critical compliance gaps (NARA records management and evidence authenticity) that must be resolved before the September 30, 2026 implementation deadline.

Summary: Implementation Readiness Assessment
✅ Standardized machine-readable format selection (e.g., OSCAL or any adopted machine-readable format): Supports standardization, interoperability, and automation across federal agencies. Note: RFC-0024 itself is format-agnostic (machine-readable). OSCAL is the most likely standard and OMB M-24-15 mandates OSCAL by Sept 30, 2026, but RFC-0024 should apply to OSCAL or any adopted machine-readable format.

CRITICAL GAP #1: Evidence Authenticity & Records Integrity
The Problem
RFC-0024 requires machine-readable packages as "evidence of federal authorization" (OSCAL is the most likely standard, but guidance should cover OSCAL or any adopted machine-readable format). But OSCAL doesn't include cryptographic protection against modification. This creates liability for federal records management.
Scenario:
Why This Matters: NARA Bulletin 2014-02 & Records Management Mandate
NARA Requirement for machine-readable records (NARA Bulletin 2014-02 and NARA General Records Schedule 3.1):
REQUIREMENT: All machine-readable format assessment packages shall be cryptographically
MECHANISM:
BENEFIT:
IMPLEMENTATION:
COST IMPACT:
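As a neutral illustration of the underlying idea (not the commenter's prescribed mechanism, whose details did not survive above), here is a minimal sketch of digest-based integrity verification for a submitted package, using only the Python standard library. The file name is hypothetical, and a real scheme would layer digital signatures and trusted timestamps on top:

```python
# Minimal sketch: digest-based integrity checking for a submitted package.
# Illustrative only - a production scheme would add digital signatures and
# trusted timestamps; the file name below is hypothetical.

import hashlib
from pathlib import Path


def package_digest(path: Path) -> str:
    """Return the hex SHA-256 digest of a package file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify(path: Path, recorded_digest: str) -> bool:
    """True only if the file still matches the digest recorded at submission."""
    return package_digest(path) == recorded_digest


# At submission time, the digest is stored alongside the package record; at
# audit time, any modification of the file changes the digest and fails
# verification.
# digest = package_digest(Path("ssp-package.oscal.json"))
# assert verify(Path("ssp-package.oscal.json"), digest)
```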
"Information management records (IT policy, security documentation) Question 1: GRC tool stores format file. If an agency replaces a GRC tool in 2028, Question 2: Format evolves (e.g., NIST releases updated version in 2028). Question 3: After system decommissioned, how long must an agency keep format evidence?
Proposed Timeline Adjustment:
CURRENT (Unrealistic):
PROPOSED (Realistic):
April 2026: Guidance + format converter tool published by NIST
2027 Full Transition: Rev5 legacy format deprecated; adopted format required for all new/renewal authorizations
BENEFIT:
Risk: What if URL is broken?
Recommendation to FedRAMP PMO
Artifact Link Validation Standard:

Issue #5: Deterministic Telemetry ≠ Truthful Telemetry
The Problem
RFC-0024 requires "deterministically generated telemetry" (machine-generated, not AI). But deterministic ≠ truthful.
Bad Example:
Recommendation to FedRAMP PMO
"Telemetry Completeness" Validation Standard:
This prevents CSPs from technically meeting the "deterministic" requirement while omitting inconvenient evidence.

Closing
My Position: RFC-0024's machine-readable format is operationally sound and needed. However, three critical gaps must be resolved:
Recommendation:
Submitted by: Trevor Lowing (private citizen, personal capacity)
-
Microsoft Azure Comment: Further, does the machine-readable authorization data requirement apply only to the finalized SSP, SAR, SAP, and POA&M documents delivered after inspection and validation of source evidence, and not to the underlying source evidence itself, under RFC-0024? Our understanding is that source evidence is out of scope for RFC-0024. Evidence such as vulnerability scans, configuration screenshots, source-code screenshots, and log data may be converted to machine-readable formats when CSPs transition to FedRAMP 20X submissions, but CSPs are not expected to convert source evidence for Rev. 5 submissions.
-
Wiz applauds the move toward "machine-generated deterministic telemetry." We believe true security is based on real-time visibility, not static documents, and we offer the following feedback to ensure this requirement modernizes FedRAMP to improve security visibility instead of digitizing bureaucratic compliance:
1. LMR-GEN-UDT Use Deterministic Telemetry
2. LMR-GEN-DGI Deterministically Generated Illustrations
3. Access to Near-Real-Time Evidence: While we understand many platforms will take time to move away from point-in-time evidence, we suggest FedRAMP add a path to incentivize more real-time data access over static snapshots.
4. Commercial Standards
5. Timelines for Platforms
-
We support the long-term modernization goals behind machine-readable authorization packages and agree that structured security data can improve automation, consistency, and reuse across the FedRAMP ecosystem. However, we have significant concerns regarding standard definition clarity and implementation feasibility under the proposed timeline. The RFC strongly implies that OSCAL will be the required machine-readable format, yet it stops short of explicitly identifying OSCAL—or any specific format—as the definitive standard. This creates more than simple ambiguity. It places CSPs in the position of preparing for compliance without a clearly defined technical requirement. If OSCAL is the intended standard, it should be explicitly stated. If alternative formats are envisioned, those formats and their conformance criteria must be defined by the PMO. Establishing technical standards is appropriately the responsibility of the governing authority—not of individual CSPs attempting to interpret policy intent or anticipate future expectations. Without clear ownership of the standard, the burden of determining “what qualifies” appears to shift to industry, introducing unnecessary risk, inconsistency, and cost.

Additionally, the current tooling ecosystem does not appear sufficiently mature to support mandatory adoption within the proposed timeframe. Available options (assumed to be OSCAL) largely consist of open-source tools requiring substantial technical expertise to operationalize, or full GRC platform replacements that would require material cost and operational disruption. For many CSPs—particularly small and mid-sized providers—the compressed implementation window effectively creates a forced tooling transition before the ecosystem is ready. Modernization efforts of this scale warrant both definitive technical standards and a phased, ecosystem-aware transition plan. To support successful adoption, we respectfully recommend:
We support the direction of automation and welcome continued collaboration. Clear standards and a realistic transition framework will ensure this initiative strengthens—rather than strains—the FedRAMP marketplace.
-
LMR-GEN-SDS
Rationale: As one of the few Cloud Service Providers that previously operated multiple Authorization Packages for distinct and specific services, we have moved towards consolidating our Authorization Packages - both within FedRAMP and DISA. This was driven by a number of factors:
Requiring distinct Service-based Data Separation would result in an administrative burden to generate dozens to hundreds of authorization data sets that are effectively 99.999% identical. An alternative requirement that achieves a similar intent would be to require CSPs that do not implement LMR-GEN-SDS for the reasons above to clearly identify their CRM-specific, service-based differentials within their Secure Configuration Guides.
-
LMR-GEN-SDS
As companies mature their products, they typically move toward a more modular model—consolidating controls, frameworks, and architecture into reusable components that can be shared across product lines. In that context, requiring multiple separate packages to support products that are the same (or materially the same) runs counter to modernization goals. It increases duplication, creates unnecessary operational overhead, and makes it harder to maintain consistency over time. A more forward-looking approach would treat separate packages as an option, not a default mandate. In other words, this should be framed as a “may” depending on the company’s use case—such as when there are meaningful differences in risk profile, deployment model, regulatory scope, customer commitments, or lifecycle ownership. But applying a blanket requirement across the board can force organizations to fragment what they have intentionally designed to be shared and modular. The result is often a significant step backward: more complexity, slower delivery, higher maintenance costs, and increased potential for drift between implementations that should remain aligned. Shifting the language from “must” to “may” would preserve flexibility, support modern modular packaging practices, and still allow separation where it is justified by real technical, operational, or compliance needs.
-
From: Garoux, LLC, a government contractor building an agentic engineering platform. We are pursuing compliance with FedRAMP High and Impact Level 4/5 for our Phase II SBIR with the USAF and other projects. We are leveraging AI and custom tooling to help manage our compliance requirements. This comment also draws on the author's prior experience founding GovReady PBC (acquired by RegScale), serving as CDO at the FCC, and contributing to the OpenControl and OSCAL communities.

What Three Generations of Machine-Readable Compliance Formats Teach Us
Our first comment on FedRAMP RFC-0024 expands the background of machine-readable compliance data to include SCAP and OpenControl, the two formats that preceded and informed OSCAL. Three successive attempts over 20+ years to make compliance machine-readable have failed to gain the wide-scale adoption needed to keep pace with Agile, DevOps, and Infrastructure-as-Code. SCAP, OpenControl, and OSCAL each advanced the state of the art in significant ways, but each also introduced one or more "cold start" problems that raised the cost of getting started too high to justify broad adoption. In this comment we summarize the three major machine-readable compliance data projects relevant to FedRAMP and enumerate six cold start problems that machine-readable compliance data formats need to solve.

Generation 1: SCAP (Security Content Automation Protocol)
SCAP, developed beginning in the early-to-mid 2000s through a collaboration of NIST, MITRE, NSA, DHS, and FIRST, with SCAP 1.0 officially released in 2009, was the first serious attempt to bring machine-readable structure to security compliance data. SCAP enabled deterministic, scanner-based evaluation of operating systems, common software platforms, and configurations. Before SCAP, software configuration verification was largely manual or driven by informal scripts with no standardized baseline. SCAP-validated scanners gave organizations a machine-enforced, repeatable way to confirm that a system was actually configured to a known standard — not just documented as if it were. Red Hat, DISA STIGs, and other major platform vendors produced SCAP content that genuinely automated configuration verification. The SCAP Validation Program created a tested ecosystem of interoperable tools. But SCAP had two distinct cold start problems that together limited its reach:

Cold start 1 — Authoring cost. Producing SCAP content for a piece of software required coordinating across multiple interlocking XML specifications (XCCDF, OVAL, OCIL, CPE, CCE, ARF), typically generating five separate XML documents knit together via XSLT transformations and specialized toolchains. Authoring SCAP involved panels of subject matter experts spending months agreeing on an authoritative configuration for each software package. The result: SCAP content existed for a narrow set of well-resourced, high-priority targets. Most software was never covered, and benchmarks lagged years behind technology development.

Cold start 2 — Output gap. Even where SCAP worked well, its deterministic telemetry could not flow into authorization artifacts. SCAP produced scanner results. FedRAMP required control implementation statements. These two outputs lived in parallel, disconnected universes. An organization with full SCAP coverage still had to write its RMF SSP and control implementation statements from scratch.
This second cold start problem is critical context for RFC-0024's emphasis on deterministic telemetry (LMR-GEN-UDT): the vision of scanner outputs flowing into authorization packages is not new. It was SCAP's unrealized promise. The question is what will make the pipeline actually get built this time.

Generation 2: OpenControl
OpenControl, launched at All Things Open 2015 by Pivotal in collaboration with 18F, directly addressed SCAP's output gap. Its key insight was correct: map scanner results and human-authored control narratives to a common, component-based YAML format that could generate SSP documentation automatically. The format was simple enough that a developer could start with a single YAML file. The 18F cloud.gov team used it to manage their SSP through GitHub like a codebase. OpenControl's cultural contribution was perhaps more significant than its technical one. By framing compliance as code (e.g., YAML files in GitHub, CI pipelines, component reuse) it introduced compliance automation to a generation of engineers who had never engaged with it before. That community-building effect seeded the broader compliance-as-code movement that OSCAL and subsequent tools inherited. OpenControl did not just build a format; it built an audience. OpenControl's cold start problem was different from SCAP's but equally limiting:

Cold start 3 — Ecosystem dependency. The model depended on an ecosystem of reusable component definitions that never materialized. It assumed that AWS, Red Hat, and other platform vendors would publish and maintain OpenControl components describing how their platforms satisfied controls — an npm-like registry of compliance building blocks. The analogy is instructive: npm succeeded because publishing a package was nearly zero-cost. Publishing an OpenControl component required understanding compliance frameworks that vendors had no internal incentive to prioritize. Fewer than 20 controls were ever specified in the AWS OpenControl repository. The vendor participation the model required never arrived at scale. Timing compounded the problem. OpenControl launched when Terraform and Infrastructure-as-Code were still early — before vendors routinely shipped machine-readable descriptions of their systems alongside their software. The cultural precedent for vendor-contributed compliance components did not yet exist. The most telling evidence: the cloud.gov compliance repository — the originating project's own SSP implementation — was dormant for four years by 2020 and archived. The project that created OpenControl could not sustain its own use of the format.

Generation 3: OSCAL
OSCAL, developed by NIST beginning in 2016 in collaboration with FedRAMP and first released in 2019, has been the most rigorous and well-resourced attempt yet. The effort established something genuinely valuable that prior efforts lacked: a standardized semantic vocabulary for compliance-relevant objects. For the first time, core compliance objects had standard, named definitions. OSCAL also attracted real tool development among GRC vendors as a generation of engineers sought to translate their experience in Agile, DevOps, Infrastructure-as-Code, and Containers to automating compliance. But OSCAL's cold start problem was equally formidable, as RFC-0024 points out: in 2025, FedRAMP processed 100+ Rev5 authorizations without a single submission that used OSCAL. Three compounding factors explain this:

Cold start 4 — All-or-nothing complexity.
To produce a single valid OSCAL SSP, an author must first construct a complete, interlocking set of objects: a catalog, a profile resolving that catalog, component definitions, system metadata, and the SSP itself, each cross-referencing the others via UUIDs. There is no valid partial OSCAL. A single control implementation cannot be expressed without the full system context surrounding it. This made OSCAL an all-or-nothing proposition, and most chose nothing.

Cold start 5 — Document-centric architecture. Rather than modeling compliance data as independent, queryable objects, OSCAL faithfully reproduced the RMF paper stack as machine-readable documents: catalog → profile → SSP → SAP → SAR → POA&M. The document remained the organizing unit. To participate in OSCAL, an organization had to understand and implement the entire RMF document lifecycle, not just the data they actually had to contribute.

Cold start 6 — SCAP integration perpetually deferred. The OSCAL community recognized early that SCAP was the natural source of deterministic telemetry that should feed OSCAL's control implementations. This integration was discussed and deferred repeatedly. The pipeline from scanner observation to authorization package was never built into the standard, leaving OSCAL dependent on the same human-written narratives it was designed to replace.

The Lesson for RFC-0024
The progression from SCAP to OpenControl to OSCAL charts a course in which each generation solved important problems while introducing new cold start issues that inhibited adoption. SCAP was too expert-dependent to author. OpenControl was too dependent on voluntary vendor ecosystem contributions. OSCAL was too complex to start without implementing everything at once.

LMR-FRX-LAF. It is our position that RFC-0024 correctly prioritizes rapid adoption over further time investments in determining the correct data standard. Small kindling lights the fire faster than larger logs. The broader history of IT standards bears out the value of smaller, less-featured approaches that ignite participation. HTML was a severely limited subset of SGML. REST prevailed over the richer-featured SOAP. JSON displaced XML as the dominant format for web API data exchange. Simpler formats regularly achieve wider adoption because they are less costly to start using. A compliance data format that is easy to emit, easy to parse, and easy to validate is the best way for us to begin to automate compliance. And the same wisdom and talent that produced the richness of OSCAL over several years will similarly improve a simpler but widely adopted initial standard over several years.

LMR-FRX-LAF. RFC-0024's openness to formats beyond OSCAL, including any format that five or more CSPs agree to use, reflects a healthy recognition that the field is not settled. That provision is sensible.

LMR-FRX-AMR, LMR-GEN-ICR. Given the state of agentic AI coding in 2026, FedRAMP is correct to propose accepting multiple formats (provided each has industry support). Commenters advocating a canonical format or concerned with fragmentation need not worry. Today's agentic AI coding tools can generate reliable parsers and transformers for any well-structured format, dropping the cost of consuming a new format by orders of magnitude and effectively erasing previous time and cost concerns. The real interoperability bottleneck is no longer syntax; it is shared semantics and trustworthy identifiers, which these commenters are correct to advocate.
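As a toy illustration of why syntax conversion is the cheap part, here is a sketch that maps a hypothetical OSCAL-style implemented-requirement record to a flat alternative shape. Every field name is invented; the conversion is trivial precisely because both shapes already agree on what a control ID, a component ID, and a statement mean, which is the real interoperability work:

```python
# Toy sketch of "syntax is cheap, semantics are hard": converting between two
# hypothetical record shapes takes a few lines, but only because both sides
# already agree on what a control ID, a component ID, and a statement mean.
# Every field name here is invented for illustration.

def to_flat_record(oscal_like: dict) -> dict:
    """Map a hypothetical OSCAL-style implemented requirement to a flat shape."""
    return {
        "control": oscal_like["control-id"],
        "component": oscal_like["component-uuid"],
        "statement": oscal_like["description"],
    }


record = {
    "control-id": "ac-2",
    "component-uuid": "11111111-2222-3333-4444-555555555555",
    "description": "Accounts are provisioned via the IdP; access reviews run quarterly.",
}

print(to_flat_record(record))
```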
Our second comment proposes specific characteristics any approved format should demonstrate, drawn from these lessons. The goal is to give FedRAMP evaluative criteria that test whether a format can actually achieve adoption, not just whether it can represent the compliance data.
-
From: Garoux, LLC, a government contractor building an agentic engineering platform. We are pursuing compliance with FedRAMP High and Impact Level 4/5 for our Phase II SBIR with the USAF and other projects. We are leveraging AI and custom tooling to help manage our compliance requirements. This comment also draws on the author’s prior experience founding GovReady PBC (acquired by RegScale), serving as CDO at the FCC, and contributing to the OpenControl and OSCAL communities.

Ten Principles for Machine-Readable Compliance Data
Our previous comment traced how three iterations of machine-readable compliance data—SCAP, OpenControl, and OSCAL—pushed compliance automation forward but adoption was stymied by "cold start" problems. LMR-FRX-LAF, LMR-FRX-AMR. This comment presents a draft "10 Principles for Machine-Readable Compliance Data", inspired by the Twelve-Factor App methodology. These principles are a starting point and need community refinement. The goal is to avoid cold start friction and facilitate rapid adoption. They amount to a set of questions that any proposed format for RFC-0024 should be able to answer. The 10 Principles are format-neutral by design. OSCAL, with changes, could satisfy many of them. A future format may satisfy them differently.

I. Atomic Observations
Each compliance data element stands alone without requiring full system context to be valid or useful (LMR-GEN-SDS). Consider a control implementation statement, a scan result, a configuration state. Each should be independently meaningful. An assessor examining one observation should not need to parse an entire SSP to understand it. A scanner emitting one finding should not need to construct a complete system model to report it using just a system ID or component ID. Each blind man should be able to report what they observe about the elephant independently. OSCAL violated this by requiring every element to exist within a fully resolved document hierarchy. The result was that no one could contribute a single observation without first building the cathedral around it. Prevents: OSCAL's all-or-nothing complexity.

II. Fast Start
A participant should be able to contribute one valid piece of compliance data using one tool without implementing the full standard. This is the single most important principle. SCAP required a specialized XML toolchain. OpenControl required understanding compliance frameworks. OSCAL required constructing an interlocking set of cross-referenced documents. Each raised the floor of participation high enough that most potential contributors never stepped over it. The test is concrete: can a developer with no compliance background, given a control ID and a description of what their service does, produce a valid, submittable compliance observation in under an hour using tools they already have? (A minimal sketch of such an observation appears at the end of this comment.) If the answer is no, the format will not achieve broad adoption regardless of its other merits. Prevents: SCAP's authoring cost, OSCAL's implementation barrier.

III. Incremental Participation
A format should support partial adoption that grows over time, not require all-or-nothing commitment (LMR-GEN-SDS). Fast start gets a participant in the door. Incremental participation keeps them there. A CSP should be able to begin with machine-readable control implementations for ten controls, expand to fifty, and eventually cover the full baseline with each intermediate state being valid and useful to FedRAMP, not a broken partial document.
This requires that the format's validation model accept incomplete coverage as a legitimate state, not an error. FedRAMP's review processes must similarly accommodate partial machine-readable submissions during a transition period. Prevents: OSCAL's interlocking document requirement, which made partial adoption structurally impossible.

IV. Bidirectional Communication
Compliance data exchange should be a conversation, not a bulk periodic drop-off (LMR-GEN-OAR). The current model is itself a cold start problem at the process level. CSP assembles a monolithic package, submits it, waits months for feedback. A machine-readable format enables something fundamentally different: CSPs push baseline compliance data continuously; agencies and FedRAMP pull additional detail on demand. This is progressive disclosure applied to authorization. High-level posture data flows automatically. Detailed implementation evidence is requested and provided as needed. Good APIs work this way. A public endpoint returns a summary; authenticated queries return detail. FHIR in healthcare distinguishes a patient summary from the full clinical record. The compliance analog is straightforward: system posture at the summary level, implementation specifics on authorized request. Prevents: the monolithic package bottleneck that makes every authorization cycle a from-scratch effort.

V. Sensitivity Tiers
High-level descriptions must be separable from sensitive implementation details by design, not as an afterthought (LMR-GEN-HRV). Not all compliance data carries the same sensitivity. A statement that "the system encrypts data at rest using FIPS 140-validated modules" is publishable. The specific key management architecture, rotation schedules, and HSM configurations behind that statement are not. Current SSPs commingle these tiers in a single document, forcing the entire package to the highest classification level and limiting who can review it. A machine-readable format should enforce sensitivity boundaries structurally, making it possible to share tier-appropriate views of compliance data with different audiences without redaction. The push/pull model in Principle IV maps naturally to this: baseline data at lower sensitivity tiers flows freely; sensitive implementation details are disclosed only on authenticated, authorized request. Prevents: the access bottleneck where reviewers cannot see data they need because it is bundled with data they cannot have.

VI. Trustworthy Identifiers
Globally persistent identifiers and document-internal references must be clearly distinguished, and global identifiers must be stable across time and context. A compliant format should specify: which identifiers are globally persistent and where they resolve; which are document-scoped and how they relate to global identifiers; and what happens when an identifier is retired or superseded. OSCAL uses UUIDs extensively but does not make this distinction. There is no clear regime separating identifiers that are globally meaningful (this specific AWS service, this specific control) from those that are document-internal references (this cross-link within my SSP). The result: identifiers become opaque strings that tools cannot validate and humans cannot verify. Prevents: silent identifier collision across documents and organizations, a problem that compounds as the ecosystem scales.

VII. Public Registry of Canonical Identifiers
Well-known assets should have authoritative, shared identifiers that any tool can reference without inventing its own.
AWS EKS, Tenable Nessus, Kubernetes, RHEL, PostgreSQL appear in thousands of SSPs. Today, each CSP invents its own way to refer to them. A machine-readable ecosystem needs a shared namespace for common platforms, tools, and services: a DNS for compliance objects. FedRAMP should maintain or designate a public registry of canonical identifiers for components that recur across authorization boundaries. This registry becomes the coordination point that enables subsequent principles. Vendors can ship compliance data tagged with identifiers that tools already understand. Prevents: the fragmentation that made OpenControl's component reuse model impractical — every tool spoke a different dialect for the same objects.

VIII. Vendor-Contributed Components
Platform and tool vendors should be able to ship machine-readable compliance descriptions alongside their products (LMR-GEN-SDS). OpenControl identified this need over a decade ago. The cultural moment may have arrived. Terraform, Helm, and OCI registries have normalized vendors shipping machine-readable metadata alongside software. AWS, Azure, and GCP publish shared responsibility models. Scanner vendors already produce structured output. The format and registry infrastructure (Principle VII) should make it straightforward for a vendor to publish: "Here is what our product does, here are the controls it supports, here is the evidence it can produce — tagged with canonical identifiers, in a format any CSP can reference." This converts compliance documentation from a per-CSP cost to a shared, vendor-maintained asset. Prevents: OpenControl's ecosystem dependency problem — but this time with the IaC cultural precedent that makes vendor participation plausible.

IX. Separation of Observation from Adjudication
Deterministic telemetry produces facts. Whether a fact represents an acceptable risk is a separate question that depends on context. These are different data types and should be treated as such (LMR-GEN-UDT, LMR-GEN-HRV). A scanner reports that a port is open, a patch is missing, a configuration value is set to X. Whether that finding represents a vulnerability, an accepted risk, or a non-issue depends on deployment context, operational environment, compensating controls, and agency risk tolerance. A firewall rule that is a finding in one deployment may be an accepted risk with documented compensating controls in another. The observation is the same; the risk adjudication differs. This separation also draws a clean and defensible line for AI's role in compliance. The observation layer stays deterministic and auditable. The adjudication layer is where context, judgment, and accumulated organizational knowledge live. That is exactly where AI assistance adds genuine value: tracking contextual changes, flagging when prior adjudications may no longer apply, and auditing the reasoning chain behind risk acceptance decisions without replacing human accountability. This is a far more defensible use of AI than "AI does compliance," and the format should make the boundary explicit. Prevents: non-reusable compliance data that embeds context-specific judgments into factual observations, and the temptation to use AI where deterministic methods suffice.

X. Semantic Vocabulary Reuse
Build on OSCAL's named compliance objects rather than reinventing terminology (LMR-FRX-LAF). OSCAL's most durable contribution is not its document architecture but its vocabulary. This is not an endorsement of OSCAL's full specification.
It is recognition that naming things well is genuinely hard, that OSCAL did this work, and that discarding it would force the community to re-fight terminology battles that are already settled. Prevents: unnecessary fragmentation from reinventing what OSCAL already got right.

Applying the Principles
These ten principles give FedRAMP a framework for evaluating any proposed format under RFC-0024's "five or more CSPs" provision. A candidate format that satisfies Principles I through III will achieve adoption. A format that additionally satisfies IV through VII will scale. A format that satisfies all ten will build an ecosystem. The principles also suggest a sequencing for RFC-0024's implementation timeline:
This phased approach aligns RFC-0024's ambition with the lessons of history: mandate adoption only after the infrastructure exists to support it.
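As a concrete sketch of Principles I, II, and VI together (referenced from Principle II above), here is an entirely hypothetical atomic observation and a fast-start validator. The field names and identifier scheme are invented for illustration and are not a proposed standard:

```python
# Hypothetical sketch of an "atomic observation" (Principles I, II, VI): one
# self-contained record that validates on its own, with no surrounding SSP or
# document hierarchy. Field names and identifiers are invented, not a
# proposed standard.

REQUIRED_FIELDS = {"observation-id", "system-id", "control-id", "statement", "observed-at"}


def is_valid_observation(obs: dict) -> bool:
    """Fast-start validation: one record, one check, no full system context."""
    return REQUIRED_FIELDS <= obs.keys() and all(
        isinstance(obs[field], str) and obs[field] for field in REQUIRED_FIELDS
    )


observation = {
    "observation-id": "obs-2026-0001",    # document-scoped identifier
    "system-id": "csp-example-system",    # globally persistent identifier
    "control-id": "sc-13",                # canonical control identifier
    "statement": "Data at rest is encrypted with FIPS 140-validated modules.",
    "observed-at": "2026-04-01T00:00:00Z",
}

print(is_valid_observation(observation))  # True
```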
-
The Cloud Service Providers - Advisory Board submits the following comments:

General Comments:
Comments on Background Section:
Comments on Motivation Section:
Comment on Summary of Deadlines:
Comments on LMR-FRX-LAF List of Approved Formats:
Comments on LMR-FRX-AMR Accepting Machine Readable-Packages:
Comment on LMR-GEN-SDS Service-based Data Separation:
Comments on LMR-GEN-DGI Deterministically Generated Illustrations:
Comment on LMR-GEN-HRV Human-Readable Versions:
-
The following public comment was received via email from Crowdstrike on Wed Mar 11: I am copy/pasting into the public comment record on their behalf as required by the FedRAMP public comment process. This is not an endorsement of the commenter or the content within the comment. It is very difficult to manage formatting from PDF submissions, so this was converted to Markdown and copy/pasted as-is - all text should be intact.

REQUEST FOR COMMENT RESPONSE
RFC-0024 FEDRAMP REV5 MACHINE-READABLE PACKAGES
March 11, 2026

I. INTRODUCTION
In response to FedRAMP’s request for comment on the FedRAMP Rev5 Machine Readable Packages (“draft standard”), CrowdStrike offers the following views. We approach these questions from the standpoint of a leading international, US-headquartered, cloud-native cybersecurity provider that defends globally distributed enterprises from globally distributed threats. CrowdStrike offers insights informed by multiple practice areas: cyber threat intelligence; proactive hunting, incident response, and managed security services; and an AI-powered software-as-a-service cybersecurity platform and marketplace. Accordingly, this perspective is informed by CrowdStrike’s role in protecting organizations from data breaches and a variety of other cyber threats.

II. COMMENTS
CrowdStrike recognizes the long-term vision of machine-readable authorization packages and appreciates FedRAMP's efforts to modernize the authorization process. The request for comment correctly identifies historical challenges with manual documentation processes and the theoretical benefits of structured, machine-readable data. We acknowledge that standardized, machine-readable formats could eventually improve efficiency and enable automated consumption of authorization data by agency tools. We welcome the opportunity to offer feedback that may be of value to FedRAMP as it considers the draft standard.

A. Misalignment with Agency Demand and Capabilities
The draft standard mandates significant investment in machine-readable package production without corresponding evidence of agency demand or readiness to consume such materials. Through direct engagement with our federal agency customers and their authorization officials who leverage our existing FedRAMP authorization for agency authorization decisions, CrowdStrike has identified minimal interest in receiving authorization packages in machine-readable formats. Agency authorization teams consistently request traditional human-readable documentation. The requirement for dual-format maintenance (LMR-GEN-HRV) acknowledges this reality but creates an unsustainable operational burden for currently authorized cloud service providers. We would be required to:

• Invest substantial resources in developing or acquiring machine-readable authoring capabilities
• Convert our existing comprehensive authorization package to machine-readable format
• Maintain authoritative machine-readable source materials going forward
• Generate human-readable versions for actual agency consumption upon request
• Ensure consistency between both formats during monthly continuous monitoring updates and significant changes
• Train our security, compliance, and technical staff on both documentation paradigms

This dual-maintenance requirement effectively doubles the documentation burden for existing authorized providers while providing no demonstrated value to the agencies that are the ultimate consumers of this information.
The draft standard notes that "no formal participants in the FedRAMP 20x Phase 1 pilot used [OSCAL] to structure the required machine-readable materials," which strongly suggests that even organizations participating in modernization pilots found insufficient value in machine-readable formats to justify adoption. Specifically:

• Agency authorization officials lack tools, training, and processes to consume machine-readable packages, making them effectively unusable for authorization decisions.
• The requirement creates a "build it and they will come" approach that places the entire burden and risk on cloud service providers without corresponding agency investment.
• Existing authorized cloud service providers will invest significant resources to produce machine-readable packages that are immediately converted back to human-readable formats for actual use by agencies.
• The LMR-GEN-USC requirement for updates after significant changes within one month creates a perpetual conversion burden as our platform evolves.

The objective of highlighting this concern is to ensure FedRAMP requirements align with demonstrated agency needs and capabilities rather than theoretical future states.

• Evaluate and select from approved machine-readable formats (with OSCAL being the only currently listed option despite zero adoption in 100+ Rev5 authorizations in 2025 and zero adoption among FedRAMP 20x Phase 1 pilot participants, all of whom used alternative machine-readable formats)
• Identify, evaluate, and procure or develop authoring tools and validation capabilities
• Assess the current vendor and consultant ecosystem for expertise
• Convert our existing multi-hundred-page System Security Plan with detailed control implementation statements
• Convert all standard Rev5 appendices referenced in LMR-FRX-LAM
• Restructure our authorization package to separate service-based data per LMR-GEN-SDS in alignment with procurement patterns
• Develop and implement processes for maintaining machine-readable packages during continuous monitoring
• Train our security, compliance, and technical staff on new tools and processes
• Establish quality assurance and validation procedures
• Integrate machine-readable package generation into our change management workflows for LMR-GEN-USC compliance

This timeline is particularly problematic for existing authorized providers given that:

• The RFC acknowledges "no formal participants in the FedRAMP 20x Phase 1 pilot used [OSCAL]," indicating that even motivated early adopters participating in a modernization pilot specifically designed to test new approaches unanimously chose alternative machine-readable formats over OSCAL
• Commercial tooling, consulting services, and 3PAO expertise are lacking due to low market demand
• The requirement applies to our complete existing authorization package, not just incremental updates

The complete absence of a phased approach, pilot program, or beta period represents a significant departure from standard change management practices for requirements of this magnitude. Organizations implementing enterprise-wide documentation transformations typically employ multi-year phased approaches with extensive piloting, particularly when converting existing comprehensive documentation rather than creating new materials. The 20x pilot experience demonstrates that machine-readable packages are achievable, but also reveals that providers gravitate toward formats other than OSCAL when given flexibility—a finding that should inform Rev5 requirements.
Specifically:

• The timeline should be extended to a minimum of 12-18 months from publication of final requirements to allow existing authorized providers proper planning, tooling acquisition, and conversion of existing materials.
• FedRAMP should establish a formal pilot program with volunteer cloud service providers who have existing authorizations to identify implementation challenges, validate tooling approaches, and develop best practices before mandatory adoption.
• A phased approach should prioritize specific document types (e.g., SSP first, then appendices, then continuous monitoring artifacts) rather than requiring simultaneous conversion of all materials in a single annual assessment.
• The marketplace prioritization outlined in LMR-FRX-GPM should be enhanced with additional tangible benefits to encourage voluntary early adoption.

The objective of these recommendations is to ensure successful adoption through realistic timelines that account for the evolving tooling ecosystem, leverage lessons learned from 20x implementations, and recognize the complexity of converting comprehensive existing authorization packages.

C. Technical Feasibility of Diagram and Illustration Automation

The draft standard's recommendation for deterministically generated illustrations and diagrams (LMR-GEN-DGI), while conceptually appealing, presents significant technical challenges that are not adequately addressed. The requirement that providers "use machine-generated deterministic telemetry to generate all necessary illustrations and diagrams, including at least the Authorization Boundary Diagram" assumes capabilities that do not currently exist in commercial tooling and may not be technically feasible for complex cloud architectures maintained by existing authorized providers, whose architectures can include:

• Hundreds of interconnected services and components across our platform
• Complex data flows across multiple security boundaries
• Shared responsibility models between provider and customer
• Conditional architectures that vary based on customer configuration choices
• Logical and physical network segmentation across multiple regions
• Integration points with customer environments and third-party services
• Inherited controls from infrastructure providers

Automatically generating accurate, comprehensible diagrams from system telemetry for architectures of this complexity requires sophisticated graph layout algorithms, semantic understanding of component relationships, and intelligent abstraction to produce diagrams that are actually useful for human review by authorization officials. The RFC provides no guidance on:

• What constitutes acceptable "machine-generated deterministic telemetry" for diagram generation
• How to handle architectural complexity that exceeds the practical limits of automated diagram generation
• Whether semi-automated approaches (human-guided automated generation) satisfy the requirement
• What validation or accuracy standards apply to automatically generated diagrams
• How to represent conditional or customer-configurable architectures in automated diagrams

The penalty for non-compliance - "a warning that illustrations and diagrams are artisanal artifacts and may be unreliable" - is particularly problematic for existing authorized providers. This warning implies that our carefully crafted diagrams, reviewed by our architects, security engineers, and 3PAO assessors, are inherently unreliable.
In reality, our manually created diagrams may be significantly more accurate and useful than automatically generated diagrams that lack appropriate context, abstraction, and the semantic understanding necessary to communicate complex security architectures to authorization officials. Specifically:

• Standards for diagram accuracy, completeness, and usability should be established regardless of generation method, with validation processes that ensure automated diagrams meet the same quality standards as manually created diagrams.
• The marketplace warning should be reconsidered, as it creates a false equivalence between generation method and reliability, potentially misleading agencies about the quality of authorization materials from existing authorized providers.

The objective of these recommendations is to ensure diagram requirements are technically achievable while maintaining the accuracy and utility that authorization officials require for security assessments of existing authorized cloud services.

III. CONCLUSION

We believe machine-readable authorization packages represent a valuable long-term vision for FedRAMP modernization. However, successful implementation requires alignment between provider capabilities, agency readiness, and realistic timelines that account for the significant conversion burden on existing authorized providers. As the standard moves forward and evolves, we recommend:

Phased Implementation Approach: Establish a multi-year roadmap with clear phases, beginning with voluntary pilot programs involving existing authorized providers, progressing to specific document types, and ultimately achieving comprehensive machine-readable packages as tooling matures and agencies develop consumption capabilities.

Agency Readiness Requirements: Coordinate machine-readable package requirements with corresponding agency investments in tools, training, and processes to consume such materials, ensuring existing authorized providers are not investing in capabilities that agencies cannot utilize.

Extended Timeline with Pilot Period: Extend implementation timelines to 12-18 months from publication of final requirements and establish formal pilot programs with volunteer providers who have existing authorizations to validate approaches, identify challenges, and develop best practices before mandatory adoption.

Realistic Technical Requirements: Ensure requirements for automated diagram generation and deterministic telemetry are technically feasible with available or near-term tooling.

Finally, we recommend FedRAMP consider whether the significant investment required for existing authorized providers to convert comprehensive Rev5 authorization packages to machine-readable formats might be better directed toward accelerating agency adoption of FedRAMP 20x, which was designed from first principles to support machine-readable authorization data. Requiring substantial investment in retrofitting existing Rev5 authorization packages may delay the transition to the more comprehensive modernization that 20x represents, while creating a significant burden on providers who have invested years in maintaining high-quality authorization materials under the current framework.
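As a concrete reference point for the diagram-automation debate in the comment above, here is a minimal sketch of what deterministic diagram generation can look like at small scale: it emits Graphviz DOT text from an inventory snapshot, and sorting the inputs guarantees identical output for identical system state. The inventory shape and component names are illustrative assumptions, not part of any FedRAMP requirement, and the hard problem the comment identifies, abstraction for hyperscale architectures, is exactly what this sketch does not solve.

```python
# A sketch of deterministic diagram generation: emit Graphviz DOT text
# from an inventory snapshot. Sorting the inputs guarantees identical
# output for identical system state. The inventory shape is an
# illustrative assumption.

def boundary_diagram_dot(inventory):
    """Emit a DOT digraph of components and their connections."""
    lines = ["digraph boundary {", '  label="Authorization Boundary";']
    for component in sorted(inventory["components"]):
        lines.append(f'  "{component}" [shape=box];')
    for src, dst in sorted(inventory["connections"]):
        lines.append(f'  "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

inventory = {
    "components": ["web-tier", "api-tier", "database"],
    "connections": [("web-tier", "api-tier"), ("api-tier", "database")],
}
print(boundary_diagram_dot(inventory))
```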
-
The following public comment was received via email from Salesforce on Wed Mar 11: I am copy/pasting into the public comment record on their behalf as required by the FedRAMP public comment process. This is not an endorsement of the commenter or the content within the comment. This was converted from a table to markdown, apologies for any formatting issues. Text should all be intact.
-
Thanks as always for the opportunity to comment and be a part of shaping FedRAMP's future.

1) LMR-FRX-LAF - List of Approved Formats

2) Authorization Package - 3PAO Deliverables

3) LMR-GEN-SDS - Service-based Data Separation

For integrated platforms, particularly SaaS, services often share the vast majority of their control inheritance (e.g., common authentication, logging, and physical security layers). Rather than physically separating these into distinct artifacts, a properly annotated, consolidated package is a more effective approach. By utilizing OSCAL metadata to tag controls by component or service, consumers can leverage the advanced search and filtering capabilities of modern GRC tools to "view" only the relevant data they need, without forcing the CSP to separately "author" it. We recommend revising this clause to define service-based data separation as an optional ('MAY') capability. Separation should be reserved for CSPs whose discrete services exhibit material differences in architecture, risk profile, or regulatory scope. This adjustment supports the broader FedRAMP modernization goal of reducing unnecessary documentation overhead while maintaining effective security reviews.

4) LMR-GEN-DGI - Deterministically Generated Illustrations

Second, for hyperscale SaaS platforms operating dynamic, cloud-native microservices architectures, generating a raw, deterministic topological map yields an artifact largely devoid of communicative value. AOs require deliberate, human-guided abstraction to properly comprehend logical boundaries, zero-trust enforcement points, IAM flows, and shared responsibility demarcations. Penalizing CSPs for producing legible, abstracted diagrams will perversely incentivize the submission of technically accurate but practically useless machine-generated outputs. We request the removal of punitive Marketplace signaling associated with non-deterministic authorization boundary diagrams. The standard should focus on the accuracy and verifiability of the diagram rather than the specific mechanism used to generate it.
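To illustrate the tagging-and-filtering approach described in item 3 of the comment above, here is a minimal sketch. It assumes OSCAL-style `by-components` annotations on each implemented requirement; the control IDs and component UUIDs are placeholders. A GRC tool could apply the same filter to present a per-service "view" of a consolidated package without separately authored artifacts.

```python
# A sketch of the "view by service" filtering described above, assuming
# OSCAL-style "by-components" annotations on each implemented requirement.
# All IDs and UUIDs are illustrative placeholders.

def requirements_for_component(implemented_requirements, component_uuid):
    """Return only the implemented requirements that reference the given component."""
    return [
        req
        for req in implemented_requirements
        if any(
            bc.get("component-uuid") == component_uuid
            for bc in req.get("by-components", [])
        )
    ]

# Example: a consolidated package where two services share most controls.
reqs = [
    {"control-id": "ac-2", "by-components": [{"component-uuid": "svc-a"}, {"component-uuid": "svc-b"}]},
    {"control-id": "sc-7", "by-components": [{"component-uuid": "svc-a"}]},
]
print([r["control-id"] for r in requirements_for_component(reqs, "svc-b")])  # ['ac-2']
```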
-
Thank you for allowing public comment on this. Below are my personal thoughts:

LMR-FRX-LAM:
LMR-FRX-LAF:
LMR-FRX-AMR:
LMR-GEN-USC:
LMR-GEN-DGI Deterministically Generated Illustrations:
LMR-GEN-UDT Use Deterministic Telemetry:
-
The motivation for moving to a machine-readable authorization package is very sound.

LMR-FRX-LAF List of Approved Formats

LMR-GEN-OAR Ongoing Authorization Requirements

LMR-GEN-DGI Deterministically Generated Illustrations
-
The Alliance for Digital Innovation (ADI) appreciates the opportunity to comment on RFC-0024, FedRAMP Rev. 5 Machine-Readable Packages. We strongly support the transition to Rev5 machine-readable authorization packages using the National Institute of Standards and Technology’s (NIST) Open Security Controls Assessment Language (OSCAL) and view this as a meaningful step toward modernizing FedRAMP.

The current assessment model remains heavily dependent on manual workflows, static documents, and longitudinal evidence collection across hundreds of controls under Rev. 5. Structured, machine-readable data has the potential to reduce manual paperwork, streamline annual assessments, enable automation, and accelerate reuse across agencies. If implemented effectively, this shift can improve assessment quality while shortening authorization timelines and reducing friction for cloud service providers (CSPs), agencies, and third-party assessment organizations (3PAOs) alike.

As this effort progresses, we encourage FedRAMP to ensure the transition delivers measurable efficiencies and cost savings in practice, particularly by avoiding prolonged parallel requirements for fully duplicative paper-based and machine-readable submissions, and by ensuring that assessment effort and associated costs decrease over time as automation and deterministic telemetry mature.

At the same time, successful implementation will require careful management of stakeholder impacts. CSPs, 3PAOs, and agencies will need to invest in tooling, telemetry integration, and training to operationalize machine-readable workflows, and these transition costs should be acknowledged and planned for. While this transition may disrupt legacy processes, it also creates an opportunity for forward-leaning assessors to differentiate themselves through faster, more consistent, and automation-enabled assessments.

In addition, modernization must extend beyond the requirement that FedRAMP “MUST accept and review” machine-readable packages. Agency readiness will be a critical determinant of success. Some customer agencies may be unable to accept machine-readable formats at this point. Close coordination with the Office of Management and Budget (including the Resource Management Organization) and with customer agencies to assess their readiness to ingest and operationalize machine-readable authorization packages is critical.

To ensure a credible and effective transition, FedRAMP should establish the necessary operational infrastructure to support machine-readable workflows, including: (1) publishing API-based submission endpoints, (2) providing structured, machine-readable feedback, and (3) establishing SLAs that distinguish review timelines for machine-readable versus non-machine-readable submissions. If additional resources are required to support agency adoption, FedRAMP should proactively identify those needs and work with agencies and OMB to outline a clear path to address them.

With clear guidance, phased implementation support, cross-agency coordination, and a focus on measurable outcomes, RFC-0024 can save significant time and resources while strengthening agencies’ security posture through real continuous monitoring. Thank you for your consideration on this matter.
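On item (2) in the comment above, structured machine-readable feedback: FedRAMP has not published such a format, so the following is purely a hypothetical sketch of what one could look like; every field name is an assumption made for illustration only.

```python
# A purely hypothetical sketch of structured, machine-readable review
# feedback. FedRAMP has not published such a format; every field name
# here is an assumption for illustration.
import json

feedback = {
    "submission-id": "pkg-0001",  # hypothetical identifier
    "status": "changes-requested",
    "findings": [
        {
            "location": "control-implementation/implemented-requirements[ac-2]",
            "severity": "error",
            "message": "Implementation statement missing for part (a).",
        }
    ],
}
print(json.dumps(feedback, indent=2))
```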
-
The following public comment was received via email from Coalfire on Wed Mar 11: I am copy/pasting into the public comment record on their behalf as required by the FedRAMP public comment process. This is not an endorsement of the commenter or the content within the comment.

Motivation:

Deadline

LMR-GEN-SDS

LMR-GEN-ICR and OAR

LMR-GEN-USC

LMR-GEN-DGI and UDT

LMR-GEN-HRV
-
The following public comment was received via email from Darktrace on Wed Mar 11: I am copy/pasting into the public comment record on their behalf as required by the FedRAMP public comment process. This is not an endorsement of the commenter or the content within the comment. Note, this comment used colors to differentiate original text vs response which will not come through in a copy/paste.

LMR-FRX-LAM List of Authorization Materials

FedRAMP MUST publish and maintain a list of required information for a Rev5 authorization package, including security controls from the NIST SP 800-53 and FedRAMP-specific assignments and guidance for these controls; once published, FedRAMP will no longer publish or maintain word-processor based templates for these materials. Note: This will include all standard Rev5 Appendices. Effective Date: 2PM ET on April 15, 2026 (tentative)

Feedback: Although these templates are labor intensive, it is important for the government to provide examples and context so that existing CSPs can develop materials and keep in line with the latest FedRAMP requirements and requisite information. Just the controls and basic guidance may not be sufficient for risk and compliance people to develop appropriate materials. Typically technology folks deal in requirements, not just controls and suggested coverage. This continues to be an issue for many CSP organizations. Perhaps the date can be extended to April 2027, where the government offers current materials as baseline/past-experience references, and then publishes a compendium of changes/advances. Overall, the templates have served as examples. For the new FedRAMP 20x standards and utilizing other methods to measure, having examples of types of measurement and reporting mechanisms will accelerate adoption vs. leaving CSPs with their GRC and Dev teams misaligned.

LMR-FRX-LAF List of Approved Formats

FedRAMP MUST publish and maintain a list of approved standardized formats for the submission of machine-readable authorization data. Any approved standardized format that has not been adopted by any cloud service providers within 1 year of its inclusion will be removed from the list. This list will include:

Feedback: CSPs/Providers should have multiple formats to choose from as it allows their business to better innovate, support multiple frameworks, and better align their SecDevOps functions, tools, and compliance to suit their business.

LMR-FRX-GPM General Prioritization of Machine-Readable Packages

FedRAMP MUST publicly identify FedRAMP Certified cloud service offerings with machine-readable authorization packages, prioritize their listing in search results, and coordinate additional support for agency adoption of such; this MUST include additional identification, prioritization, and support for services that leverage machine-generated deterministic telemetry or have adopted Rev5 Balance Improvement Releases. Note: This demonstrates FedRAMP’s strong support for prioritizing and supporting cloud service providers that adopt changing requirements that benefit the federal government and ensures cloud service providers who prioritize security-based improvements to meet changing FedRAMP requirements will benefit from doing so when agencies consider their use.
Effective Date: 2PM ET on April 15, 2026 (tentative)

Feedback: Although rapid adoption of new requirements seems like an initial win, the government should also recognize the millions of dollars of investment that industry has placed in developing and maintaining these systems, as well as training and development of people and processes, and not just technology. Like the government, organizations and their people shouldn’t be forced to re-invest rapidly just to maintain their current systems with new tools and formats forced upon them. This is likely to disrupt other improvement priorities. Refactoring the same compliance shouldn’t override delivering innovation and system improvements.

LMR-GEN-OAR Ongoing Authorization Requirements

Providers MUST submit a full authorization package in an approved machine-readable format for each annual assessment to maintain FedRAMP Certification. Notes: This requirement applies to all cloud service providers with a current Rev5 FedRAMP Certification.

Feedback: Putting a full package into machine-readable format, although inevitable, still has a few unanswered questions: How will provenance be maintained for the package contents once moved to some third-party machine/software?

LMR-GEN-USC Updates after Significant Changes

Providers MUST update their machine-readable authorization package after completing significant changes to accurately reflect the current state of their cloud service by 2PM ET at the end of the following month. Notes: This addresses a significant problem with legacy authorization packages when cloud service providers make significant changes such as adding or integrating additional services in a way that creates confusion for agencies reviewing their authorization package for potential use because these changes are not documented in the primary materials. Corrective Actions: This requirement will be enforced over a rolling 1-year period as follows: First action: Public notification and a 3-month grace period to address the requirement; failure or a new occurrence that is not addressed by the end of the grace period will lead to Strike 2.

Feedback: Where will these updated changes (essentially the entire updated package) be stored in a way that keeps them secure? Today, government systems only house MODERATE or below packages. Is the requirement to have the package available upon request, or to submit the updated package to FedRAMP? It is unclear how/where these updates will be secured while stored.

LMR-GEN-DGI Deterministically Generated Illustrations

Providers SHOULD use machine-generated deterministic telemetry to generate all necessary illustrations and diagrams, including at least the Authorization Boundary Diagram. Effective Date: 2PM ET on September 30, 2026. Note: The marketplace listing and authorization package for cloud services that do not follow this recommendation will include a warning that illustrations and diagrams are artisanal artifacts and may be unreliable.

LMR-GEN-UDT Use Deterministic Telemetry

Providers SHOULD use machine-generated deterministic telemetry in place of manually or probabilistically generated narratives in authorization data where feasible. Note: FedRAMP is deliberately leaving this vague and up to the cloud service provider; use of machine-generated deterministic telemetry will be a factor in both initial and ongoing FedRAMP Certification. Effective Date: 2PM ET on September 30, 2026

Feedback: Although this should be included, for services that utilize multiple IaaS/PaaS services, machine-generated telemetry may not clearly depict the service and boundaries.
This should be optional, with better descriptive text and data, along with vendor-generated diagrams where appropriate, to support clarity and transparency.

LMR-GEN-HRV Human-Readable Versions

Providers MUST also make human-readable versions of all materials in the machine-readable authorization package available in standard formats, such as Word and Excel, if requested by FedRAMP or an agency. Notes:

Feedback: Although rapid adoption of new requirements seems like an initial win, the government should also recognize the millions of dollars of investment that industry has placed in developing and maintaining these systems, as well as training and development of people and processes, and not just technology. Like the government, organizations and their people shouldn’t be forced to re-invest rapidly just to maintain their current systems with new tools and formats forced upon them. This is likely to disrupt other improvement priorities. Refactoring the same compliance shouldn’t override delivering innovation and system improvements.
-
RFC-0024 FedRAMP Rev5 Machine-Readable Packages
❓ Please note that FedRAMP will not answer questions in this thread as it is reserved for public comment. If you would like to ask a question or generally discuss this RFC informally, please use the General discussion / Q&A for RFC-0019 through RFC-0024 thread. Thank you!
Status: Closed
Start Date: January 13, 2026
Closing Date: March 11, 2026
Summary
This RFC proposes modifications to the FedRAMP Rev5 process for current and future Rev5-based assessments and authorizations to ensure that cloud service providers produce machine-readable authorization data that can be ingested by agency tools. This RFC applies only to the FedRAMP Rev5 process and does not apply to FedRAMP 20x.
These modifications include explicit requirements for the production of machine-readable authorization data by FedRAMP Rev5 providers, related timelines, and corrective actions for those who fail to meet these requirements. This RFC also proposes requirements for the structured nature of this required machine-readable authorization data to ensure interoperability between diverse government and industry systems, including the use of OSCAL (Open Security Controls Assessment Language).
Finally, this RFC proposes requirements and timelines for Rev5-based assessments and authorizations to transition, where feasible, from human-written narratives (or machine-generated probabilistic text designed to mimic human-written narratives) to machine-generated deterministic telemetry.
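For readers unfamiliar with OSCAL, here is a minimal sketch of the kind of structured data the summary above is pointing at: the top-level shape of an OSCAL system security plan rendered as JSON from Python. Field names follow the published OSCAL SSP model, but the snippet is illustrative and deliberately incomplete (a real SSP requires many more fields), and every value is a placeholder.

```python
# A sketch of the top-level shape of an OSCAL system security plan (SSP)
# in JSON. Field names follow the published OSCAL SSP model, but this is
# deliberately incomplete and every value is a placeholder.
import json
import uuid

ssp = {
    "system-security-plan": {
        "uuid": str(uuid.uuid4()),
        "metadata": {
            "title": "Example Cloud Service SSP",
            "version": "1.0.0",
            "oscal-version": "1.1.2",
        },
        # Points at the control baseline the plan responds to
        # (e.g., a FedRAMP Rev5 profile); this URL is a placeholder.
        "import-profile": {"href": "https://example.test/rev5-moderate-profile.json"},
        "system-characteristics": {
            "system-name": "Example Cloud Service",
            "security-sensitivity-level": "moderate",
        },
        "control-implementation": {
            "description": "How the system implements its baseline controls.",
            "implemented-requirements": [
                {
                    "uuid": str(uuid.uuid4()),
                    "control-id": "ac-2",
                    # Today a human-written narrative would go here; this RFC's
                    # direction is to fill such slots with deterministic telemetry.
                }
            ],
        },
    }
}

print(json.dumps(ssp, indent=2))
```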
This RFC is aligned with other concurrent RFCs that have additional detail on specific topics but have been published separately to encourage topic-specific comments:
Background
The history of federal information system security plans charts a fascinating course through the past 50+ years. Laws and policies throughout the 1970s and 1980s established recommendations for maintaining the security of federal information systems that changed rapidly in response to technology and shifting global politics. Momentum for system security plans reached an initial peak when the Office of Management and Budget (OMB) transmitted Appendix III to OMB Circular No. A-130 in 1985. This appendix, titled “Security of Federal Automated Information Systems”, established a “minimum set of controls to be included in Federal automated information systems security programs” and required agency officials to maintain information security programs.
The Computer Security Act of 1987 soon followed, creating the first statutory requirement for all agencies to “establish a plan for the security and privacy of each Federal computer system.” In 1988, OMB Bulletin No. 88-16 responded to the Computer Security Act by directing agencies to prepare security plans for each identified system and submit them to the National Bureau of Standards (which would soon be renamed to NIST) for advice and comment.
NIST completed a review of the resulting government-wide computer security and privacy plans in 1990 and published a report (NIST IR 4409) that found, among many other issues, a lack of consistency and standardization in these plans across government agencies. A key result from this report was that NIST would develop guidance on computer security planning. Notably, this report included the following warning in its conclusion:
“It is unclear whether the plans submitted to NIST and NSA under OMB Bulletin 88-16 were true computer security planning instruments, or only artifacts produced to satisfy an external submission requirement.”
Eventually, in 1998, NIST first published the NIST SP 800-18 Guide for Developing Security Plans for Information Technology Systems. This guide recommended a format for system security plans that improved on the original format from OMB Bulletin No. 88-16 and provided a significant amount of supporting guidance on the creation of system security plans.
In 2016, NIST collaborated with FedRAMP to begin the development of OSCAL, the Open Security Controls Assessment Language, to provide a standardized machine-based representation of these artifacts to encourage a transition from manual human-written documents to materials that included machine-generated deterministic telemetry. Industry is generally quick to adopt capabilities that reduce cost, complexity, and risk but has shown little interest in leveraging OSCAL at scale in spite of its considerable promise. In 2025, FedRAMP processed 100+ Rev5 authorizations without a single submission that used OSCAL; no formal participants in the FedRAMP 20x Phase 1 pilot used it to structure the required machine-readable materials.
Today, at the beginning of 2026, the FedRAMP system security plans used by providers and agencies for the Rev5 process are at best a minor incremental improvement over the template provided in the original SP 800-18. The expectation remains that a human will manually write narrative responses to a series of questions, attach supporting documents and materials, and justify an implementation for each control in text that is not directly tied to the system itself… and that these materials will all be reviewed manually by a different human.
Definitions
Italicized terms are explained in the Rev5 Balance Improvement Releases Definitions, with the most commonly used terms in this document provided below for quick reference:
Machine-Readable: Has the meaning from 44 U.S. Code § 3502 (18), which is "the term 'machine-readable', when used with respect to data, means data in a format that can be easily processed by a computer without human intervention while ensuring no semantic meaning is lost."
Authorization Data: The collective information required by FedRAMP for initial and ongoing assessment and authorization of a cloud service offering, including the authorization package.
Authorization Package: Has the meaning from 44 U.S. Code § 3607 (b)(8), which is "the essential information that can be used by an agency to determine whether to authorize the operation of an information system or the use of a designated set of common controls for all cloud computing products and services authorized by FedRAMP."
The following terms will be newly explained as follows:
Machine-Generated: Automatically produced by a computer process, application, or other mechanism without the intervention or manipulation of a human during production.
Deterministic Telemetry: Verifiable data collected directly from an authoritative source that represents a factual and reproducible observation of the attributes of a system such as the system’s state, configuration, or behavior.
Note: Probabilistic inferences, generative outputs, or predictive assessments such as those produced using generative transformer models (commonly referred to as “Generative AI”) do not constitute a factual record of the system state and must not be used to generate deterministic telemetry.
FedRAMP Certification: This is a draft label for FedRAMP Rev5 authorization discussed separately in RFC-0021: Updating the FedRAMP Marketplace.
The capitalized key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this documentation are to be interpreted as described in IETF RFC 2119.
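To make the deterministic telemetry definition above concrete, here is a minimal sketch: it records a factual, reproducible observation (a content hash) of a configuration artifact read directly from its source, which anyone with access to the same artifact can recompute and verify. The record fields are illustrative assumptions, not a prescribed format.

```python
# A sketch of a deterministic telemetry record per the definition above:
# a factual, reproducible observation collected directly from a source
# artifact. The record fields are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

def observe_artifact(path):
    """Record a verifiable observation of a file-based configuration artifact."""
    with open(path, "rb") as f:
        raw = f.read()
    return {
        "source": path,
        "observed-at": datetime.now(timezone.utc).isoformat(),
        # The digest makes the observation reproducible: anyone with access
        # to the same artifact can recompute and verify it.
        "sha256": hashlib.sha256(raw).hexdigest(),
        "size-bytes": len(raw),
    }

# Observing this script itself keeps the example self-contained.
print(json.dumps(observe_artifact(__file__), indent=2))
```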
Motivation
The FedRAMP Authorization Act and OMB Memorandum M-24-15 directed FedRAMP to create a modernized assessment, authorization, and continuous monitoring process for cloud services used by agencies; FedRAMP 20x is being developed from first principles to ensure this process will support native machine-generated deterministic telemetry that is persistently distributed via machine-readable artifacts. FedRAMP is carefully integrating select improvements that have been piloted and tested with 20x into the Rev5 process to balance modernization with stability.
This set of requirements and recommendations, unique to the Rev5 process, are designed to ensure that legacy Rev5 FedRAMP Certified cloud service offerings can produce modern validated authorization data and that agencies can automatically consume this data to make both initial and ongoing authorization decisions.
First, the groundwork will be laid by aggressively transitioning FedRAMP authorization data from traditional word-processor and spreadsheet based submission materials to machine-readable structured information. This will address the “chicken or the egg problem” that has hindered wide-scale adoption of machine-readable structured materials by establishing formal requirements and deadlines for industry adoption of this capability; without industry adoption, there are no machine-readable structured materials for agencies to consume, so the change must begin outside the government even if the machine-readable structured materials are only used to generate traditional documents for agencies at first.
Second, as industry adopts, improves, and innovates with the production of machine-readable structured authorization data, FedRAMP will encourage and reward the integration of machine-generated deterministic telemetry within these materials. Providers following the Rev5 process that incorporate Balance Improvement Releases and build in machine-generated deterministic telemetry will be ranked higher in the Marketplace, receive additional support from FedRAMP such as centralized continuous monitoring, and be more likely to meet updated agency procurement requirements.
These changes will allow the Rev5 process to continue to exist and compete against the new 20x process until the ecosystem is ready to fully transition to 20x.
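The first step above notes that machine-readable structured materials may "only be used to generate traditional documents for agencies at first." That rendering step is mechanically straightforward; here is a minimal sketch that produces a Markdown control summary from structured data, assuming an OSCAL-flavored input shape, with all control IDs and narratives as placeholders.

```python
# A sketch of rendering a traditional human-readable document from
# machine-readable source data. The input loosely follows OSCAL naming;
# control IDs and narratives are placeholders.

def render_markdown(implemented_requirements):
    """Render a human-readable control summary from structured data."""
    lines = ["# Control Implementation Summary", ""]
    for req in implemented_requirements:
        lines.append(f"## {req['control-id'].upper()}")
        lines.append(req.get("description", "_No narrative provided._"))
        lines.append("")
    return "\n".join(lines)

requirements = [
    {"control-id": "ac-2", "description": "Accounts are provisioned through the identity provider."},
    {"control-id": "sc-7", "description": "Boundary protection is enforced by managed firewalls."},
]
print(render_markdown(requirements))
```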
Summary of Deadlines
This summary is provided only for convenience and is not authoritative; please review the full proposed requirements and recommendations below for authoritative effective dates and specific applicability. In the event of a mismatch between deadlines in this summary and effective dates in the requirements and recommendations below, use the effective date for the specific requirement.
Proposed Requirements for Rev5 Machine-Readable Packages
After public comment, the final form of these requirements and recommendations will apply to all cloud services obtaining or maintaining a Rev5 FedRAMP Certification based on the final Effective Date(s).
Unless otherwise specified in a specific requirement, the default corrective actions for cloud service providers that fail to address these requirements will be as follows:
Initial grace period until 2PM ET on September 30, 2027: Public notification that the provider has failed to meet this requirement and is pending revocation of FedRAMP Certification on September 30, 2027, unless this requirement is addressed.
After 2PM ET on September 30, 2027: Revocation of FedRAMP Certification (including revocation of any legacy exceptions based on FedRAMP Certification status), requiring a completely new initial authorization that meets all FedRAMP requirements for new assessments and authorizations at that time.
LMR-FRX-LAM List of Authorization Materials
LMR-FRX-LAF List of Approved Formats
LMR-FRX-AMR Accepting Machine-Readable Packages
LMR-FRX-PRM Prioritizing Review of Machine-Readable Submissions
LMR-FRX-GPM General Prioritization of Machine-Readable Packages
LMR-GEN-ICR Initial Certification Requirements
LMR-GEN-OAR Ongoing Authorization Requirements
LMR-GEN-SDS Service-based Data Separation
LMR-GEN-USC Updates after Significant Changes
LMR-GEN-DGI Deterministically Generated Illustrations
LMR-GEN-UDT Use Deterministic Telemetry
LMR-GEN-HRV Human-Readable Versions