diff --git a/.kiro/README.md b/.kiro/README.md
new file mode 100644
index 00000000000..fcd99ea2ebd
--- /dev/null
+++ b/.kiro/README.md
@@ -0,0 +1,42 @@
+# CodeLoom-4-Bedrock
+
+CodeLoom-4-Bedrock is configured to develop AWS SDK code examples across 12+ programming languages for AWS Documentation.
+
+## How It Works
+
+This setup uses two key components to ensure consistent, high-quality code:
+
+### MCP Servers
+- **Amazon Bedrock Knowledge Base**: Access to curated coding standards and premium implementation patterns
+- **AWS Knowledge Server**: Direct access to AWS documentation and service information
+
+### Steering Rules
+Automated guidance that enforces:
+- **Knowledge Base First**: Always consult knowledge bases before writing code
+- **Language-Specific Patterns**: Each language has specific naming, structure, and testing requirements
+
+## Quick Start
+
+1. **Check MCP Status**: Ensure servers are running in Kiro MCP Server view
+2. **Choose Language**: Each has specific patterns (see steering docs)
+3. **Research Service**: The tool will automatically query knowledge bases and AWS docs
+4. **Follow the Workflow**: KB consultation → Implementation → Testing → Documentation
+
+## Key Requirements
+
+- **Knowledge Base Consultation**: Mandatory before any code creation
+- **All Tests Must Pass**: Zero failures before work is complete
+- **Hello Scenarios First**: Simple examples before complex ones
+- **Real AWS Integration**: Tests use actual AWS services with proper cleanup
+
+## Configuration Files
+
+- **MCP Setup**: `.kiro/settings/mcp.json` (requires `cex-ai-kb-access` AWS profile)
+- **Detailed Rules**: `.kiro/steering/*.md` files contain comprehensive guidelines
+- **Language Specifics**: See `tech.md`, `python-tech.md`, `java-tech.md`, etc.
+
+## Need Help?
+
+- Review steering files in `.kiro/steering/` for detailed guidance
+- Check MCP server status if knowledge base queries fail
+- All language-specific patterns are documented in the steering rules
\ No newline at end of file
diff --git a/.kiro/settings/mcp.json b/.kiro/settings/mcp.json
new file mode 100644
index 00000000000..f45c14c3835
--- /dev/null
+++ b/.kiro/settings/mcp.json
@@ -0,0 +1,28 @@
+{
+ "mcpServers": {
+ "awslabs.bedrock-kb-retrieval-mcp-server": {
+ "command": "uvx",
+ "args": [
+ "awslabs.bedrock-kb-retrieval-mcp-server@latest"
+ ],
+ "env": {
+ "AWS_PROFILE": "cex-ai-kb-access",
+ "AWS_REGION": "us-west-2",
+ "FASTMCP_LOG_LEVEL": "ERROR",
+ "BEDROCK_KB_RERANKING_ENABLED": "false"
+ },
+ "disabled": false,
+ "autoApprove": []
+ },
+ "aws-knowledge-mcp-server": {
+ "command": "uvx",
+ "args": [
+ "mcp-proxy",
+ "--transport",
+ "streamablehttp",
+ "https://knowledge-mcp.global.api.aws"
+ ],
+ "disabled": false
+ }
+ }
+}
\ No newline at end of file
diff --git a/.kiro/steering/dotnet-tech.md b/.kiro/steering/dotnet-tech.md
new file mode 100644
index 00000000000..127c0a3384e
--- /dev/null
+++ b/.kiro/steering/dotnet-tech.md
@@ -0,0 +1,176 @@
+# .NET Technology Stack & Build System
+
+## .NET 3.5+ Development Environment
+
+### Build Tools & Dependencies
+- **Build System**: dotnet CLI
+- **Package Manager**: NuGet
+- **Testing Framework**: xUnit
+- **Code Formatting**: dotnet-format
+- **SDK Version**: AWS SDK for .NET
+- **.NET Version**: .NET 3.5+ (recommended .NET 6+)
+
+### Common Build Commands
+
+```bash
+# Build and Package
+dotnet build SOLUTION.sln # Build solution
+dotnet build PROJECT.csproj # Build specific project
+dotnet clean # Clean build artifacts
+
+# Testing
+dotnet test # Run all tests
+dotnet test --filter Category=Integration # Run integration tests
+dotnet test --logger trx # Run tests and log results to a TRX file
+
+# Execution
+dotnet run # Run project
+dotnet run --project PROJECT.csproj # Run specific project
+
+# Code Quality
+dotnet format # Format code
+```
+
+### .NET-Specific Pattern Requirements
+
+#### File Naming Conventions
+- Use PascalCase for class names and file names
+- Service prefix pattern: `{Service}Actions.cs` (e.g., `S3Actions.cs`)
+- Hello scenarios: `Hello{Service}.cs` (e.g., `HelloS3.cs`)
+- Test files: `{Service}Tests.cs`
+
+#### Hello Scenario Structure
+- **Class naming**: `Hello{Service}.cs` class with main method
+- **Method structure**: Static Main method as entry point
+- **Documentation**: Include XML documentation explaining the hello example purpose (see the sketch after this list)
+
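+A minimal sketch of a Hello example under these conventions, using Amazon S3 as an illustrative service (the class, namespace, and output text are assumptions, not existing files):
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Amazon.S3;
+
+namespace Amazon.DocSamples.S3
+{
+    /// <summary>
+    /// A minimal "hello" example that lists your S3 buckets to confirm that the
+    /// SDK and your credentials are configured correctly.
+    /// </summary>
+    public class HelloS3
+    {
+        public static async Task Main()
+        {
+            // The using declaration disposes the client when Main exits.
+            using var client = new AmazonS3Client();
+            var response = await client.ListBucketsAsync();
+            Console.WriteLine($"Hello, Amazon S3! You have {response.Buckets.Count} bucket(s).");
+        }
+    }
+}
+```
+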
+#### Code Structure Standards
+- **Namespace naming**: Use reverse domain notation (e.g., `Amazon.DocSamples.S3`)
+- **Class structure**: One public class per file matching filename
+- **Method naming**: Use PascalCase for method names
+- **Properties**: Use PascalCase for property names
+- **Constants**: Use PascalCase for constants
+- **Async methods**: Suffix with `Async` (e.g., `ListBucketsAsync`)
+
+#### Error Handling Patterns
+```csharp
+using Amazon.S3;
+using Amazon.S3.Model;
+using System;
+using System.Threading.Tasks;
+
+public class ExampleClass
+{
+ public async Task ExampleMethodAsync()
+ {
+        // The using declaration disposes the client when the method exits.
+        using var s3Client = new AmazonS3Client();
+
+ try
+ {
+ var response = await s3Client.ListBucketsAsync();
+ // Process response
+ Console.WriteLine($"Found {response.Buckets.Count} buckets");
+ }
+ catch (AmazonS3Exception e)
+ {
+ // Handle S3-specific exceptions
+ Console.WriteLine($"S3 Error: {e.Message}");
+ Console.WriteLine($"Error Code: {e.ErrorCode}");
+ throw;
+ }
+ catch (Exception e)
+ {
+ // Handle general exceptions
+ Console.WriteLine($"Error: {e.Message}");
+ throw;
+ }
+ }
+}
+```
+
+#### Testing Standards
+- **Test framework**: Use xUnit attributes (`[Fact]`, `[Theory]`)
+- **Integration tests**: Mark with `[Trait("Category", "Integration")]`
+- **Async testing**: Use `async Task` for async test methods
+- **Resource management**: Use `using` statements for AWS clients
+- **Test naming**: Use descriptive method names explaining the test purpose (see the sketch below)
+
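+A minimal test sketch combining these standards: an async xUnit fact tagged as an integration test. The class name and assertion are illustrative, not taken from existing files:
+
+```csharp
+using System.Threading.Tasks;
+using Amazon.S3;
+using Xunit;
+
+public class S3Tests
+{
+    [Fact]
+    [Trait("Category", "Integration")]
+    public async Task ListBucketsAsync_ReturnsBucketCollection()
+    {
+        // Dispose the client when the test finishes.
+        using var client = new AmazonS3Client();
+
+        var response = await client.ListBucketsAsync();
+
+        Assert.NotNull(response.Buckets);
+    }
+}
+```
+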
+#### Project Structure
+```
+src/
+├── {Service}Examples/
+│   ├── Hello{Service}.cs
+│   ├── {Service}Actions.cs
+│   ├── {Service}Scenarios.cs
+│   └── {Service}Examples.csproj
+└── {Service}Examples.Tests/
+    ├── {Service}Tests.cs
+    └── {Service}Examples.Tests.csproj
+```
+
+#### Documentation Requirements
+- **XML documentation**: Use `///` for class and method documentation
+- **Parameter documentation**: Document all parameters with `<param>` tags
+- **Return documentation**: Document return values with `<returns>` tags
+- **Exception documentation**: Document exceptions with `<exception>` tags
+- **README sections**: Include dotnet setup and execution instructions
+
+### AWS Credentials Handling
+
+#### Critical Credential Testing Protocol
+- **CRITICAL**: Before assuming AWS credential issues, always test credentials first with `aws sts get-caller-identity`
+- **NEVER** assume credentials are incorrect without verification
+- If credentials test passes but .NET SDK fails, investigate SDK-specific credential chain issues
+- Common .NET SDK credential issues: EC2 instance metadata service conflicts, credential provider chain order
+
+#### Credential Chain Configuration
+```csharp
+// Explicit credential chain setup
+var chain = new CredentialProfileStoreChain();
+if (chain.TryGetAWSCredentials("default", out var credentials))
+{
+ var config = new AmazonS3Config();
+ var client = new AmazonS3Client(credentials, config);
+}
+```
+
+### Build Troubleshooting
+
+#### DotNetV4 Build Troubleshooting
+- **CRITICAL**: When you get a response that the project file does not exist, use `listDirectory` to find the correct project/solution file path before trying to build again
+- **NEVER** repeatedly attempt the same build command without first locating the actual file structure
+- Always verify file existence with directory listing before executing build commands
+
+### Language-Specific Pattern Errors to Avoid
+- ❌ **NEVER create examples for dotnetv3 UNLESS explicitly instructed to by the user**
+- ❌ **NEVER use camelCase for .NET class or method names**
+- ❌ **NEVER forget to dispose AWS clients (use using statements)**
+- ❌ **NEVER ignore proper exception handling for AWS operations**
+- ❌ **NEVER skip NuGet package management**
+- ❌ **NEVER assume credentials without testing first**
+
+### Best Practices
+- ✅ **ALWAYS follow the established .NET project structure**
+- ✅ **ALWAYS use PascalCase for .NET identifiers**
+- ✅ **ALWAYS use using statements for AWS client management**
+- ✅ **ALWAYS include proper exception handling for AWS service calls**
+- ✅ **ALWAYS test AWS credentials before assuming credential issues**
+- ✅ **ALWAYS include comprehensive XML documentation**
+- ✅ **ALWAYS use async/await patterns for AWS operations**
+
+### Project Configuration Requirements
+- **Target Framework**: Specify the appropriate .NET version in the .csproj file (see the sample project file after this list)
+- **AWS SDK packages**: Include specific AWS service NuGet packages
+- **Test packages**: Include xUnit and test runner packages
+- **Configuration**: Support for appsettings.json and environment variables
+
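+A sketch of what such a project file might contain. The package names shown are typical choices (`AWSSDK.S3` as the service package) and the target framework and versions are illustrative, not prescribed by this repository:
+
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+  <PropertyGroup>
+    <OutputType>Exe</OutputType>
+    <TargetFramework>net6.0</TargetFramework>
+    <ImplicitUsings>enable</ImplicitUsings>
+  </PropertyGroup>
+
+  <ItemGroup>
+    <!-- Service-specific AWS SDK package; replace S3 with the target service. -->
+    <PackageReference Include="AWSSDK.S3" Version="3.7.*" />
+  </ItemGroup>
+</Project>
+```
+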
+### Integration with Knowledge Base
+Before creating .NET code examples:
+1. Query `coding-standards-KB` for "DotNet-code-example-standards"
+2. Query `DotNet-premium-KB` for "DotNet implementation patterns"
+3. Follow KB-documented patterns for project structure and class organization
+4. Validate against existing .NET examples only after KB consultation
\ No newline at end of file
diff --git a/.kiro/steering/java-tech.md b/.kiro/steering/java-tech.md
new file mode 100644
index 00000000000..b83bae02edb
--- /dev/null
+++ b/.kiro/steering/java-tech.md
@@ -0,0 +1,133 @@
+# Java Technology Stack & Build System
+
+## Java v2 Development Environment
+
+### Build Tools & Dependencies
+- **Build System**: Apache Maven
+- **Testing Framework**: JUnit 5
+- **Build Plugin**: Apache Maven Shade Plugin
+- **SDK Version**: AWS SDK for Java v2
+- **Java Version**: Java 8+ (recommended Java 11+)
+
+### Common Build Commands
+
+```bash
+# Build and Package
+mvn clean compile # Compile source code
+mvn package # Build with dependencies
+mvn clean package # Clean and build
+
+# Testing
+mvn test # Run all tests
+mvn test -Dtest=ClassName # Run specific test class
+mvn test -Dtest=ClassName#methodName # Run specific test method
+
+# Execution
+java -cp target/PROJECT-1.0-SNAPSHOT.jar com.example.Main
+mvn exec:java -Dexec.mainClass="com.example.Main"
+```
+
+### Java-Specific Pattern Requirements
+
+#### File Naming Conventions
+- Use PascalCase for class names
+- Service prefix pattern: `{Service}{Action}.java` (e.g., `S3ListBuckets.java`)
+- Hello scenarios: `Hello{Service}.java` (e.g., `HelloS3.java`)
+- Test files: `{Service}ActionTest.java`
+
+#### Hello Scenario Structure
+- **Class naming**: `Hello{Service}.java` class with main method
+- **Method structure**: Static main method as entry point
+- **Documentation**: Include Javadoc explaining the hello example purpose (see the sketch after this list)
+
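+A minimal sketch of a Hello class under these conventions, using Amazon S3 as an illustrative service (the package and class names are assumptions, not existing files):
+
+```java
+package com.example.s3;
+
+import software.amazon.awssdk.services.s3.S3Client;
+
+/**
+ * A minimal "hello" example that lists your S3 buckets to confirm that the
+ * SDK and your credentials are configured correctly.
+ */
+public class HelloS3 {
+    public static void main(String[] args) {
+        // try-with-resources closes the client automatically.
+        try (S3Client s3Client = S3Client.create()) {
+            s3Client.listBuckets().buckets()
+                    .forEach(bucket -> System.out.println("Bucket: " + bucket.name()));
+        }
+    }
+}
+```
+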
+#### Code Structure Standards
+- **Package naming**: Use reverse domain notation (e.g., `com.example.s3`)
+- **Class structure**: One public class per file matching filename
+- **Method naming**: Use camelCase for method names
+- **Constants**: Use UPPER_SNAKE_CASE for static final variables
+- **Imports**: Group imports logically (Java standard, AWS SDK, other libraries)
+
+#### Error Handling Patterns
+```java
+import software.amazon.awssdk.services.s3.S3Client;
+import software.amazon.awssdk.core.exception.SdkException;
+import software.amazon.awssdk.services.s3.model.ListBucketsResponse;
+import software.amazon.awssdk.services.s3.model.S3Exception;
+
+public class ExampleClass {
+ public void exampleMethod() {
+ try (S3Client s3Client = S3Client.builder().build()) {
+ // AWS service call
+            ListBucketsResponse response = s3Client.listBuckets();
+            // Process response
+            response.buckets().forEach(bucket -> System.out.println(bucket.name()));
+ } catch (S3Exception e) {
+ // Handle service-specific exceptions
+ System.err.println("S3 Error: " + e.awsErrorDetails().errorMessage());
+ throw e;
+ } catch (SdkException e) {
+ // Handle general SDK exceptions
+ System.err.println("SDK Error: " + e.getMessage());
+ throw e;
+ }
+ }
+}
+```
+
+#### Testing Standards
+- **Test framework**: Use JUnit 5 annotations (`@Test`, `@BeforeEach`, `@AfterEach`)
+- **Integration tests**: Mark with `@Tag("integration")` or similar
+- **Resource management**: Use try-with-resources for AWS clients
+- **Assertions**: Use JUnit 5 assertion methods
+- **Test naming**: Use descriptive method names explaining the test purpose (see the sketch below)
+
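+A minimal sketch showing how these testing standards combine: a tagged integration test that uses try-with-resources for the client (the class name and assertion are illustrative):
+
+```java
+import org.junit.jupiter.api.Tag;
+import org.junit.jupiter.api.Test;
+import software.amazon.awssdk.services.s3.S3Client;
+
+import static org.junit.jupiter.api.Assertions.assertNotNull;
+
+public class S3ListBucketsTest {
+
+    @Test
+    @Tag("integration")
+    void listBucketsReturnsResponse() {
+        // Close the client automatically with try-with-resources.
+        try (S3Client s3Client = S3Client.builder().build()) {
+            assertNotNull(s3Client.listBuckets());
+        }
+    }
+}
+```
+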
+#### Maven Project Structure
+```
+src/
+├── main/
+│   └── java/
+│       └── com/
+│           └── example/
+│               └── {service}/
+│                   ├── Hello{Service}.java
+│                   ├── {Service}Actions.java
+│                   └── {Service}Scenario.java
+└── test/
+    └── java/
+        └── com/
+            └── example/
+                └── {service}/
+                    └── {Service}Test.java
+```
+
+#### Documentation Requirements
+- **Class Javadoc**: Include purpose, usage examples, and prerequisites
+- **Method Javadoc**: Document parameters, return values, and exceptions
+- **Inline comments**: Explain complex AWS service interactions
+- **README sections**: Include Maven setup and execution instructions
+
+### Language-Specific Pattern Errors to Avoid
+- ❌ **NEVER assume class naming without checking existing examples**
+- ❌ **NEVER use snake_case for Java class or method names**
+- ❌ **NEVER forget to close AWS clients (use try-with-resources)**
+- ❌ **NEVER ignore proper exception handling for AWS operations**
+- ❌ **NEVER skip Maven dependency management**
+
+### Best Practices
+- ✅ **ALWAYS follow the established Maven project structure**
+- ✅ **ALWAYS use PascalCase for class names and camelCase for methods**
+- ✅ **ALWAYS use try-with-resources for AWS client management**
+- ✅ **ALWAYS include proper exception handling for AWS service calls**
+- ✅ **ALWAYS follow Java naming conventions and package structure**
+- ✅ **ALWAYS include comprehensive Javadoc documentation**
+
+### Maven Configuration Requirements
+- **AWS SDK BOM**: Include the AWS SDK Bill of Materials (BOM) for version management (see the snippet after this list)
+- **Compiler plugin**: Configure for appropriate Java version
+- **Shade plugin**: For creating executable JARs with dependencies
+- **Surefire plugin**: For test execution configuration
+
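+For example, the AWS SDK BOM is typically imported in the `dependencyManagement` section of the pom.xml; the version below is illustrative, so use the latest release:
+
+```xml
+<dependencyManagement>
+  <dependencies>
+    <dependency>
+      <groupId>software.amazon.awssdk</groupId>
+      <artifactId>bom</artifactId>
+      <version>2.25.0</version>
+      <type>pom</type>
+      <scope>import</scope>
+    </dependency>
+  </dependencies>
+</dependencyManagement>
+```
+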
+### Integration with Knowledge Base
+Before creating Java code examples:
+1. Query `coding-standards-KB` for "Java-code-example-standards"
+2. Query `Java-premium-KB` for "Java implementation patterns"
+3. Follow KB-documented patterns for Maven structure and class organization
+4. Validate against existing Java examples only after KB consultation
\ No newline at end of file
diff --git a/.kiro/steering/javascript-tech.md b/.kiro/steering/javascript-tech.md
new file mode 100644
index 00000000000..df3e5152416
--- /dev/null
+++ b/.kiro/steering/javascript-tech.md
@@ -0,0 +1,188 @@
+# JavaScript Technology Stack & Build System
+
+## JavaScript/Node.js Development Environment
+
+### Build Tools & Dependencies
+- **Runtime**: Node.js (LTS version recommended)
+- **Package Manager**: npm
+- **Testing Framework**: Jest
+- **Code Formatting**: Prettier
+- **Linting**: Biome (or ESLint)
+- **SDK Version**: AWS SDK for JavaScript v3
+
+### Common Build Commands
+
+```bash
+# Dependencies
+npm install # Install dependencies
+npm ci # Clean install from package-lock.json
+
+# Testing
+npm test # Run all tests
+npm run test:unit # Run unit tests
+npm run test:integration # Run integration tests
+
+# Code Quality
+npm run lint # Lint code
+npm run format # Format code with Prettier
+
+# Execution
+node src/hello-{service}.js # Run hello scenario
+npm start # Run main application
+```
+
+### JavaScript-Specific Pattern Requirements
+
+#### File Naming Conventions
+- Use kebab-case for file names
+- Service prefix pattern: `{service}-action.js` (e.g., `s3-list-buckets.js`)
+- Hello scenarios: `hello-{service}.js` (e.g., `hello-s3.js`)
+- Test files: `{service}-action.test.js`
+
+#### Hello Scenario Structure
+- **File naming**: `hello-{service}.js` or hello function in main module
+- **Function structure**: Async function as main entry point
+- **Documentation**: Include JSDoc comments explaining the hello example purpose
+
+#### Code Structure Standards
+- **Module system**: Use ES6 modules (import/export) or CommonJS (require/module.exports)
+- **Function naming**: Use camelCase for function names
+- **Constants**: Use UPPER_SNAKE_CASE for constants
+- **Classes**: Use PascalCase for class names
+- **Async/Await**: Use async/await for asynchronous operations
+
+#### Error Handling Patterns
+```javascript
+import { S3Client, ListBucketsCommand } from "@aws-sdk/client-s3";
+
+const client = new S3Client({ region: "us-east-1" });
+
+async function listBuckets() {
+ try {
+ const command = new ListBucketsCommand({});
+ const response = await client.send(command);
+
+ console.log("Buckets:", response.Buckets);
+ return response.Buckets;
+ } catch (error) {
+ if (error.name === "NoSuchBucket") {
+ console.error("Bucket not found:", error.message);
+ } else if (error.name === "AccessDenied") {
+ console.error("Access denied:", error.message);
+ } else {
+ console.error("AWS SDK Error:", error.message);
+ }
+ throw error;
+ }
+}
+
+export { listBuckets };
+```
+
+#### Testing Standards
+- **Test framework**: Use Jest with appropriate matchers
+- **Integration tests**: Mark with appropriate test descriptions
+- **Async testing**: Use async/await in test functions
+- **Mocking**: Use Jest mocks for unit tests when appropriate
+- **Test naming**: Use descriptive test names explaining the test purpose (see the sketch below)
+
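+A minimal Jest sketch following these standards, assuming the `listBuckets` function from the error-handling example above is exported from a file named `s3-actions.js` (both file names are illustrative):
+
+```javascript
+// tests/s3-actions.integration.test.js
+import { listBuckets } from "../s3-actions.js";
+
+describe("listBuckets", () => {
+  // Runs against real AWS, so it relies on the credential chain being configured.
+  test("returns an array of buckets", async () => {
+    const buckets = await listBuckets();
+    expect(Array.isArray(buckets)).toBe(true);
+  });
+});
+```
+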
+#### Project Structure
+```
+src/
+├── hello-{service}.js
+├── {service}-actions.js
+├── {service}-scenarios.js
+└── tests/
+    ├── {service}-actions.test.js
+    └── {service}-integration.test.js
+```
+
+#### Package.json Configuration
+```json
+{
+ "name": "{service}-examples",
+ "version": "1.0.0",
+ "type": "module",
+ "scripts": {
+ "test": "jest",
+ "test:unit": "jest --testPathPattern=unit",
+ "test:integration": "jest --testPathPattern=integration",
+ "lint": "biome check .",
+ "format": "prettier --write ."
+ },
+ "dependencies": {
+ "@aws-sdk/client-{service}": "^3.0.0",
+ "@aws-sdk/credential-providers": "^3.0.0"
+ },
+ "devDependencies": {
+ "jest": "^29.0.0",
+ "prettier": "^3.0.0",
+ "@biomejs/biome": "^1.0.0"
+ }
+}
+```
+
+#### Documentation Requirements
+- **JSDoc comments**: Use `/**` for function and class documentation
+- **Parameter documentation**: Document parameters with `@param`
+- **Return documentation**: Document return values with `@returns`
+- **Example documentation**: Include `@example` blocks
+- **README sections**: Include npm setup and execution instructions
+
+### AWS SDK v3 Specific Patterns
+
+#### Client Configuration
+```javascript
+import { S3Client } from "@aws-sdk/client-s3";
+import { fromEnv } from "@aws-sdk/credential-providers";
+
+const client = new S3Client({
+ region: process.env.AWS_REGION || "us-east-1",
+ credentials: fromEnv(), // Optional: explicit credential provider
+});
+```
+
+#### Command Pattern Usage
+```javascript
+import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
+
+const client = new S3Client({ region: "us-east-1" });
+
+async function uploadObject(bucketName, key, body) {
+ const command = new PutObjectCommand({
+ Bucket: bucketName,
+ Key: key,
+ Body: body,
+ });
+
+ return await client.send(command);
+}
+```
+
+### Language-Specific Pattern Errors to Avoid
+- ❌ **NEVER use snake_case for JavaScript identifiers**
+- ❌ **NEVER forget to handle Promise rejections**
+- ❌ **NEVER mix callback and Promise patterns**
+- ❌ **NEVER ignore proper error handling for AWS operations**
+- ❌ **NEVER skip npm dependency management**
+
+### Best Practices
+- ✅ **ALWAYS use kebab-case for file names**
+- ✅ **ALWAYS use camelCase for JavaScript identifiers**
+- ✅ **ALWAYS use async/await for asynchronous operations**
+- ✅ **ALWAYS include proper error handling for AWS service calls**
+- ✅ **ALWAYS use AWS SDK v3 command pattern**
+- ✅ **ALWAYS include comprehensive JSDoc documentation**
+- ✅ **ALWAYS handle environment variables for configuration**
+
+### Environment Configuration
+- **AWS Region**: Use `AWS_REGION` environment variable
+- **Credentials**: Support AWS credential chain (environment, profile, IAM roles)
+- **Configuration**: Use environment variables for service-specific settings
+
+### Integration with Knowledge Base
+Before creating JavaScript code examples:
+1. Query `coding-standards-KB` for "JavaScript-code-example-standards"
+2. Query `JavaScript-premium-KB` for "JavaScript implementation patterns"
+3. Follow KB-documented patterns for project structure and module organization
+4. Validate against existing JavaScript examples only after KB consultation
\ No newline at end of file
diff --git a/.kiro/steering/knowledge-base-integration.md b/.kiro/steering/knowledge-base-integration.md
new file mode 100644
index 00000000000..2ec5df8cce0
--- /dev/null
+++ b/.kiro/steering/knowledge-base-integration.md
@@ -0,0 +1,222 @@
+# Knowledge Base Integration for Code Examples Development
+
+## Overview
+When developing AWS SDK code examples, agents should leverage the available resources to ensure accuracy, consistency, and adherence to best practices. This repository has access to multiple knowledge resources that contain curated information about AWS services, implementation patterns, and premium code examples.
+
+## Available Knowledge Resources
+
+### 1. Amazon Bedrock Knowledge Base Retrieval MCP Server
+
+- **Tool**: `ListKnowledgeBases`
+- **Purpose**: Discover available knowledge bases and get their IDs
+
+- **Tool**: `QueryKnowledgeBases`
+- **Purpose**: Query knowledge bases using natural language
+
+### 2. AWS Knowledge MCP Server
+
+- **Tool**: `read_documentation`
+- **Purpose**: Retrieve and convert AWS documentation pages to markdown
+- **Usage**: Read for AWS service-specific information, API details, parameter requirements, and service capabilities
+- **Auto-approved**: Yes - can be used automatically without user confirmation
+
+- **Tool**: `search_documentation`
+- **Purpose**: Search across all AWS documentation
+- **Usage**: Search/query for AWS service-specific information, API details, parameter requirements, and service capabilities
+- **Auto-approved**: Yes - can be used automatically without user confirmation
+
+## Mandatory Knowledge Base Consultation Workflow
+
+### 🚨 CRITICAL REQUIREMENT 🚨
+**BEFORE CREATING ANY CODE IN ANY LANGUAGE, YOU MUST:**
+1. Use `ListKnowledgeBases` to discover available knowledge bases
+2. Use `QueryKnowledgeBases` to query the "coding-standards-KB" for "[language]-code-example-standards" to understand coding standards
+3. Use `QueryKnowledgeBases` to query the "[language]-premium-KB" to establish coding patterns and quality benchmarks
+4. Use AWS Knowledge MCP Server tools (`search_documentation`, `read_documentation`) for service understanding
+
+**Examples:**
+- `QueryKnowledgeBases("coding-standards-KB", "Python-code-example-standards")`
+- `QueryKnowledgeBases("Python-premium-KB", "Python implementation patterns")`
+
+**FAILURE TO DO THIS WILL RESULT IN INCORRECT CODE STRUCTURE AND REJECTED WORK**
+
+### Before Creating Any AWS Service Code Example:
+
+1. **Call AWS Knowledge MCP Server for Understanding Service**
+ ```
+ Use search_documentation and read_documentation tools to research:
+ - Service overview and core concepts
+ - Key API operations and methods
+ - Required parameters and optional configurations
+ - Common use cases and implementation patterns
+ - Service-specific best practices and limitations
+ ```
+
+2. **MANDATORY: Use Amazon Bedrock Knowledge Base Retrieval MCP Server for Language Patterns**
+ ```
+ REQUIRED: Use QueryKnowledgeBases to establish:
+ - Query both "coding-standards-KB" for standards and "[language]-premium-KB" for patterns
+ - Coding standards and implementation patterns for the target language
+ - Quality benchmarks and best practices
+ - Error handling approaches
+ - Testing methodologies
+ - Documentation formats
+ ```
+
+3. **Cross-Reference Implementation Approaches**
+ ```
+ Gather findings from both sources to:
+ - Identify the most appropriate implementation approach
+ - Ensure consistency with discovered patterns
+ - Validate service-specific requirements
+ - Confirm best practice adherence
+ ```
+
+## CRITICAL: Post-Knowledge Base Consultation Workflow
+
+### 🚨 MANDATORY BEHAVIOR AFTER KB CONSULTATION 🚨
+
+Once you have completed the mandatory knowledge base consultation (ListKnowledgeBases + QueryKnowledgeBases for both "coding-standards-KB" and "[language]-premium-KB"), you MUST follow this strict workflow:
+
+#### IMMEDIATELY AFTER KB CONSULTATION:
+1. **STOP examining existing code files**
+2. **PROCEED DIRECTLY to implementation using KB findings**
+3. **DO NOT waste time reading existing service implementations**
+4. **USE KB results as the complete and authoritative guide**
+
+#### FORBIDDEN ACTIONS POST-KB CONSULTATION:
+- ❌ **NEVER examine other service directories for "patterns"**
+- ❌ **NEVER read existing code files to "understand structure"**
+- ❌ **NEVER use phrases like "let me check existing implementations"**
+- ❌ **NEVER second-guess the KB findings by looking at code**
+
+#### REQUIRED ACTIONS POST-KB CONSULTATION:
+- ✅ **IMMEDIATELY begin implementation using KB patterns**
+- ✅ **TRUST the KB results completely**
+- ✅ **Reference KB findings in your implementation decisions**
+- ✅ **Only examine existing code IF KB results are unclear (rare)**
+
+### Efficiency Enforcement
+
+**The Knowledge Base consultation is designed to eliminate the need for code examination.**
+
+If you find yourself wanting to "check existing patterns" after KB consultation, this indicates:
+1. You didn't trust the KB results (WRONG)
+2. You didn't query the KB thoroughly enough (FIX: query again)
+3. You're falling back to old inefficient habits (STOP)
+
+## Required Knowledge Base Queries by Development Phase
+
+### Phase 1: Service Research and Planning
+**Always use AWS Knowledge MCP Server tools with questions like:**
+- "What is [AWS Service] and what are its primary use cases?"
+- "What are the key API operations for [AWS Service]?"
+- "What are the required parameters for [specific operation]?"
+- "What are common implementation patterns for [AWS Service]?"
+- "What are the best practices for [AWS Service] error handling?"
+
+### Phase 2: Implementation Pattern Discovery
+**REQUIRED: Query both knowledge bases through Amazon Bedrock Knowledge Base Retrieval MCP Server:**
+
+**From "coding-standards-KB":**
+- Language-specific coding standards and conventions
+- Required file structures and naming patterns
+
+**From "[language]-premium-KB":**
+- Similar service patterns within the target language
+- Established error handling and testing approaches
+- Documentation and metadata patterns
+- Language-specific directory structure requirements
+
+### Phase 3: Code Structure Validation
+**Use both knowledge sources to verify:**
+- Implementation aligns with AWS service capabilities
+- Code structure follows conventions
+- Error handling covers service-specific scenarios
+- Testing approach matches established patterns
+
+## Examples of Queries to use with Knowledge Bases
+
+### Service Overview Queries
+```
+search_documentation("What is Amazon S3 and what are its core features?")
+search_documentation("What are the main DynamoDB operations for CRUD functionality?")
+search_documentation("What authentication methods does Lambda support?")
+```
+
+### Implementation-Specific Queries
+```
+search_documentation("How do I configure S3 bucket policies programmatically?")
+search_documentation("What are the required parameters for DynamoDB PutItem operation?")
+search_documentation("How do I handle pagination in EC2 DescribeInstances?")
+```
+
+### Best Practices Queries
+```
+search_documentation("What are S3 security best practices for SDK implementations?")
+search_documentation("How should I handle DynamoDB throttling in production code?")
+search_documentation("What are Lambda function timeout considerations?")
+```
+
+## Integration Requirements
+
+### For All Code Example Development:
+1. **MANDATORY Amazon Bedrock Knowledge Base consultation for Standards**: Every code example must begin with using `ListKnowledgeBases` and `QueryKnowledgeBases` to search for "[language] code example standards" in the "coding-standards-KB" knowledge base
+2. **MANDATORY AWS Knowledge MCP Server consultation**: Every code example must begin with AWS service research using `search_documentation` and `read_documentation` tools
+3. **MANDATORY Amazon Bedrock Knowledge Base consultation for Language Patterns**: Every code example must use `ListKnowledgeBases` and `QueryKnowledgeBases` to query the "[language]-premium-KB" knowledge base to establish coding standards, understand implementation patterns, and define quality benchmarks for the target language.
+4. **Document KB findings**: Include relevant information from KB queries in code comments
+5. **Validate against KB**: Ensure final implementation aligns with KB recommendations
+6. **Reference KB sources**: When applicable, reference specific KB insights in documentation
+
+**CRITICAL**: You CANNOT create language-specific code without first consulting the Amazon Bedrock Knowledge Base Retrieval MCP Server for that language's standards.
+
+### For Service-Specific Examples:
+1. **Service capability verification**: Confirm all used features are supported by the service using AWS Knowledge MCP Server tools
+2. **Parameter validation**: Verify all required parameters are included and optional ones are documented using AWS documentation tools
+3. **Error scenario coverage**: Include error handling for service-specific failure modes identified through AWS Knowledge MCP Server research
+4. **Best practice adherence**: Follow service-specific best practices identified through AWS Knowledge MCP Server and Amazon Bedrock Knowledge Base research
+
+### For Cross-Service Examples:
+1. **Service interaction patterns**: Research how services integrate with each other using AWS Knowledge MCP Server tools
+2. **Data flow validation**: Ensure data formats are compatible between services using AWS documentation research
+3. **Authentication consistency**: Verify authentication approaches work across all services using AWS Knowledge MCP Server tools
+4. **Performance considerations**: Research any service-specific performance implications using AWS documentation tools
+
+## Quality Assurance Through Knowledge Base
+
+### Before Code Completion:
+- [ ] **MANDATORY**: Used `ListKnowledgeBases` and `QueryKnowledgeBases` for "coding-standards-KB" to get "[language] code example standards"
+- [ ] **MANDATORY**: Used `QueryKnowledgeBases` for "[language]-premium-KB" to establish coding patterns and quality benchmarks
+- [ ] **MANDATORY**: Used AWS Knowledge MCP Server tools (`search_documentation`, `read_documentation`) for comprehensive service understanding
+- [ ] Validated implementation against KB recommendations from all knowledge sources
+- [ ] Confirmed error handling covers scenarios identified through AWS Knowledge MCP Server research
+- [ ] Verified best practices from both Amazon Bedrock Knowledge Base sources are implemented
+- [ ] **CRITICAL**: Confirmed code structure follows language-specific standards from both knowledge base sources
+
+### Documentation Requirements:
+- Include service descriptions sourced from AWS Knowledge MCP Server tools in code comments
+- Reference specific insights from Amazon Bedrock Knowledge Base queries in README files
+- Document any deviations from KB recommendations with justification
+- Provide parameter explanations validated through AWS documentation tools
+
+## Troubleshooting with Knowledge Base
+
+### When Implementation Issues Arise:
+1. **Use AWS Knowledge MCP Server tools** for service-specific troubleshooting guidance
+2. **Query Amazon Bedrock Knowledge Base** for similar issues and their resolutions in the target language
+3. **Cross-reference solutions** to ensure compatibility with repository patterns
+4. **Validate fixes** against recommendations from both knowledge sources before implementation
+
+### Common Troubleshooting Queries:
+```
+# AWS Knowledge MCP Server queries
+search_documentation("Common errors when working with [AWS Service] and how to resolve them")
+search_documentation("Why might [specific operation] fail and how to handle it?")
+read_documentation("https://docs.aws.amazon.com/[service]/latest/[relevant-page]")
+
+# Amazon Bedrock Knowledge Base queries
+QueryKnowledgeBases("coding-standards-KB", "[language] error handling standards")
+QueryKnowledgeBases("[language]-premium-KB", "error handling examples in [language]")
+QueryKnowledgeBases("[language]-premium-KB", "troubleshooting methods in [language]")
+```
+
+This knowledge base integration ensures that all code examples are built on a foundation of accurate, comprehensive AWS service knowledge while maintaining consistency with established repository patterns and best practices.
\ No newline at end of file
diff --git a/.kiro/steering/product.md b/.kiro/steering/product.md
new file mode 100644
index 00000000000..4c240ded85b
--- /dev/null
+++ b/.kiro/steering/product.md
@@ -0,0 +1,16 @@
+# AWS SDK Code Examples
+
+This repository contains comprehensive code examples demonstrating how to use AWS SDKs across multiple programming languages to interact with AWS services. The examples are designed to be injected into AWS Documentation and serve as practical learning resources for developers.
+
+## Purpose
+- Provide working code examples for AWS SDK usage across 12+ programming languages
+- Support AWS Documentation with tested, production-ready code snippets
+- Demonstrate best practices for AWS service integration
+- Offer single-service actions, multi-service scenarios, and cross-service applications
+
+## Key Features
+- Multi-language SDK support (Python, Java, .NET, JavaScript, Go, Kotlin, PHP, Ruby, Rust, Swift, C++, SAP ABAP)
+- Three types of examples: single-service actions, scenarios, and cross-service applications
+- Automated testing and validation framework
+- Docker containerization for isolated development environments
+- Integration with AWS Documentation generation pipeline
\ No newline at end of file
diff --git a/.kiro/steering/python-tech.md b/.kiro/steering/python-tech.md
new file mode 100644
index 00000000000..5b378e4791c
--- /dev/null
+++ b/.kiro/steering/python-tech.md
@@ -0,0 +1,561 @@
+# Python Technology Stack & Build System
+
+## 🚨 READ THIS FIRST - MANDATORY WORKFLOW 🚨
+
+**BEFORE CREATING ANY PYTHON CODE FOR AWS SERVICES:**
+
+1. **FIRST**: Run knowledge base consultation (ListKnowledgeBases + QueryKnowledgeBases)
+2. **SECOND**: Create service stubber in `python/test_tools/{service}_stubber.py`
+3. **THIRD**: Add stubber to `python/test_tools/stubber_factory.py`
+4. **FOURTH**: Create conftest.py with ScenarioData class
+5. **FIFTH**: Create implementation files with complete AWS data structures
+6. **SIXTH**: Run ALL mandatory commands (pytest, black, pylint, writeme)
+
+**❌ SKIPPING ANY STEP = REJECTED CODE**
+**❌ WRONG ORDER = REJECTED CODE**
+**❌ INCOMPLETE DATA STRUCTURES = REJECTED CODE**
+
+## Python 3.6+ Development Environment
+
+### Build Tools & Dependencies
+- **Package Manager**: pip
+- **Virtual Environment**: venv
+- **Testing Framework**: pytest
+- **Code Formatting**: black
+- **Linting**: pylint, flake8
+- **Type Checking**: mypy (where applicable)
+
+### Common Build Commands
+
+```bash
+# Environment Setup
+python -m venv .venv
+source .venv/bin/activate # Linux/macOS
+.venv\Scripts\activate # Windows
+
+# Dependencies
+pip install -r requirements.txt
+
+# Testing
+python -m pytest -m "not integ" # Unit tests
+python -m pytest -m "integ" # Integration tests
+
+# Code Quality
+black . # Format code
+pylint --rcfile=.github/linters/.python-lint .
+```
+
+## 🚨 CRITICAL: MANDATORY WORKFLOW BEFORE ANY CODE CREATION 🚨
+
+### STEP-BY-STEP MANDATORY SEQUENCE (MUST BE FOLLOWED EXACTLY)
+
+**❌ COMMON MISTAKES THAT LEAD TO REJECTED CODE:**
+- Creating any files before knowledge base consultation
+- Creating conftest.py before service stubber
+- Skipping ScenarioData class for complex services
+- Using incomplete AWS data structures in tests
+- Not running proper pytest markers
+- Skipping code formatting and linting
+
+**✅ CORRECT MANDATORY SEQUENCE:**
+
+**STEP 1: KNOWLEDGE BASE CONSULTATION (REQUIRED FIRST)**
+```bash
+# MUST be done before creating ANY files
+ListKnowledgeBases()
+QueryKnowledgeBases("coding-standards-KB", "Python-code-example-standards")
+QueryKnowledgeBases("Python-premium-KB", "Python implementation patterns")
+```
+
+**STEP 2: CREATE SERVICE STUBBER (REQUIRED BEFORE CONFTEST)**
+```bash
+# Check if python/test_tools/{service}_stubber.py exists
+# If missing, create it following existing patterns
+# Add to python/test_tools/stubber_factory.py
+```
+
+**STEP 3: CREATE CONFTEST.PY WITH SCENARIODATA**
+```bash
+# Create python/example_code/{service}/test/conftest.py
+# MUST include ScenarioData class for complex services
+# Import from test_tools.fixtures.common import *
+```
+
+**STEP 4: CREATE IMPLEMENTATION FILES**
+```bash
+# Create wrapper, hello, scenario files
+# Use complete AWS data structures (not minimal ones)
+```
+
+**STEP 5: MANDATORY TESTING AND QUALITY**
+```bash
+# MUST run these exact commands:
+python -m pytest -m "not integ" # Unit tests
+python -m pytest -m "integ" # Integration tests
+black . # Format code
+pylint --rcfile=.github/linters/.python-lint .
+python -m writeme --languages Python:3 --services {service}
+```
+
+### Python-Specific Pattern Requirements
+
+#### File Naming Conventions
+- Use snake_case for all Python files
+- Service prefix pattern: `{service}_action.py` (e.g., `s3_list_buckets.py`)
+- Scenario files: `{service}_basics.py` or `{service}_scenario.py`
+- Test files: `test_{service}_action.py`
+
+#### Hello Scenario Structure
+- **File naming**: `{service}_hello.py` or hello function in main module
+- **Function naming**: `hello_{service}()` or `main()`
+- **Documentation**: Include docstrings explaining the hello example purpose (see the sketch after this list)
+
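+A minimal hello sketch under these conventions, using Amazon S3 as an illustrative service (the file and function names follow the patterns above but are assumptions):
+
+```python
+# s3_hello.py
+import boto3
+
+
+def hello_s3():
+    """Lists your S3 buckets to confirm that boto3 and your credentials work."""
+    s3_client = boto3.client("s3")
+    response = s3_client.list_buckets()
+    print("Hello, Amazon S3! Your buckets are:")
+    for bucket in response.get("Buckets", []):
+        print(f"\t{bucket['Name']}")
+
+
+if __name__ == "__main__":
+    hello_s3()
+```
+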
+#### Scenario Pattern Structure
+**MANDATORY for all scenario files:**
+
+```python
+# scenario_{service}_basics.py structure
+import logging
+import os
+import sys
+from typing import Optional
+
+import boto3
+from botocore.exceptions import ClientError
+
+from {service}_wrapper import {Service}Wrapper
+
+# Add relative path to include demo_tools
+sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../.."))
+from demo_tools import question as q
+
+logger = logging.getLogger(__name__)
+
+class {Service}Scenario:
+ """Runs an interactive scenario that shows how to use {AWS Service}."""
+
+ def __init__(self, {service}_wrapper: {Service}Wrapper):
+ """
+ :param {service}_wrapper: An instance of the {Service}Wrapper class.
+ """
+ self.{service}_wrapper = {service}_wrapper
+
+ def run_scenario(self):
+ """Runs the {AWS Service} basics scenario."""
+ print("-" * 88)
+ print("Welcome to the {AWS Service} basics scenario!")
+ print("-" * 88)
+
+ try:
+ self._setup_phase()
+ self._demonstration_phase()
+ self._examination_phase()
+ except Exception as e:
+ logger.error(f"Scenario failed: {e}")
+ finally:
+ self._cleanup_phase()
+
+ def _setup_phase(self):
+ """Setup phase implementation."""
+ pass
+
+ def _demonstration_phase(self):
+ """Demonstration phase implementation."""
+ pass
+
+ def _examination_phase(self):
+ """Examination phase implementation."""
+ pass
+
+ def _cleanup_phase(self):
+ """Cleanup phase implementation."""
+ pass
+
+def main():
+ """Runs the {AWS Service} basics scenario."""
+ logging.basicConfig(level=logging.WARNING, format="%(levelname)s: %(message)s")
+
+ try:
+ {service}_wrapper = {Service}Wrapper.from_client()
+ scenario = {Service}Scenario({service}_wrapper)
+ scenario.run_scenario()
+ except Exception as e:
+ logger.error(f"Failed to run scenario: {e}")
+
+if __name__ == "__main__":
+ main()
+```
+
+**Scenario Requirements:**
+- ✅ **ALWAYS** include descriptive comment block at top explaining scenario steps
+- ✅ **ALWAYS** use demo_tools for user interaction
+- ✅ **ALWAYS** implement proper cleanup in finally block
+- ✅ **ALWAYS** break scenario into logical phases
+- ✅ **ALWAYS** include comprehensive error handling
+
+#### Code Structure Standards
+- **Imports**: Follow PEP 8 import ordering (standard library, third-party, local)
+- **Functions**: Use descriptive names with snake_case
+- **Classes**: Use PascalCase for class names
+- **Constants**: Use UPPER_CASE for constants
+- **Type Hints**: Include type annotations where beneficial
+
+#### Wrapper Class Pattern
+**MANDATORY for all services:**
+
+```python
+# {service}_wrapper.py structure
+import logging
+import boto3
+from botocore.exceptions import ClientError
+from typing import Dict, List, Optional, Any
+
+logger = logging.getLogger(__name__)
+
+class {Service}Wrapper:
+ """Encapsulates {AWS Service} functionality."""
+
+ def __init__(self, {service}_client: boto3.client):
+ """
+ :param {service}_client: A Boto3 {AWS Service} client.
+ """
+ self.{service}_client = {service}_client
+
+ @classmethod
+ def from_client(cls):
+ {service}_client = boto3.client("{service}")
+ return cls({service}_client)
+
+ # Individual action methods with proper error handling
+ def action_method(self, param: str) -> Dict[str, Any]:
+ """
+ Action description.
+
+ :param param: Parameter description.
+ :return: Response description.
+ """
+ try:
+ response = self.{service}_client.action_name(Parameter=param)
+            logger.info("Action completed successfully")
+ return response
+ except ClientError as e:
+ error_code = e.response["Error"]["Code"]
+ if error_code == "SpecificError":
+ logger.error("Specific error handling")
+ else:
+ logger.error(f"Error in action: {e}")
+ raise
+```
+
+**Wrapper Class Requirements:**
+- ✅ **ALWAYS** include proper logging
+- ✅ **ALWAYS** provide `from_client()` class method
+- ✅ **ALWAYS** handle service-specific errors
+- ✅ **ALWAYS** include comprehensive docstrings
+- ✅ **ALWAYS** use type hints for parameters and returns
+
+#### Testing Convention and Structure
+
+**🚨 MANDATORY Testing Infrastructure Setup (EXACT ORDER REQUIRED):**
+
+**❌ CRITICAL ERROR: Creating conftest.py before service stubber will cause failures**
+
+**✅ CORRECT ORDER:**
+
+1. **FIRST: Create Service Stubber (MANDATORY BEFORE CONFTEST):**
+ - **CRITICAL**: Check if `python/test_tools/{service}_stubber.py` exists
+ - **CRITICAL**: If missing, create it FIRST following existing patterns (e.g., `controltower_stubber.py`)
+ - **CRITICAL**: Inherit from `ExampleStubber` and implement ALL service-specific stub methods
+ - **CRITICAL**: Add to `python/test_tools/stubber_factory.py` import and factory function
+ - **CRITICAL**: Each stub method MUST handle both stubbing and AWS passthrough modes
+
+2. **SECOND: Create Service conftest.py (ONLY AFTER STUBBER EXISTS):**
+ - **CRITICAL**: Create `python/example_code/{service}/test/conftest.py`
+ - **CRITICAL**: Import common fixtures: `from test_tools.fixtures.common import *`
+ - **CRITICAL**: Add path configuration: `sys.path.append("../..")`
+ - **CRITICAL**: For complex services, MUST create ScenarioData class (not optional)
+ - **CRITICAL**: Use complete AWS data structures in tests (not minimal ones)
+
+3. **Test File Structure:**
+ - **Unit Tests**: `test_{service}_wrapper.py` - Test wrapper class methods with mocked responses
+ - **Integration Tests**: `test_{service}_integration.py` - Test against real AWS services (marked with `@pytest.mark.integ`)
+ - **Scenario Tests**: `test_{service}_scenario.py` - Test complete scenarios end-to-end
+
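+A minimal sketch of a service stubber as described in step 1, for a hypothetical service. The module path and the `_stub_bifurcator` helper are assumed from the existing `ExampleStubber` base class; mirror an existing stubber such as `controltower_stubber.py` for the exact signatures:
+
+```python
+# python/test_tools/widgets_stubber.py (illustrative service name)
+from test_tools.example_stubber import ExampleStubber
+
+
+class WidgetsStubber(ExampleStubber):
+    """Stubs a hypothetical 'widgets' client for unit tests."""
+
+    def __init__(self, client, use_stubs=True):
+        super().__init__(client, use_stubs)
+
+    def stub_list_widgets(self, widgets, error_code=None):
+        expected_params = {}
+        response = {"Widgets": widgets}
+        # Assumed helper from ExampleStubber: registers the stubbed response or
+        # expected error when stubbing is enabled, and is skipped in AWS
+        # passthrough mode.
+        self._stub_bifurcator(
+            "list_widgets", expected_params, response, error_code=error_code
+        )
+```
+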
+**Testing Pattern Examples:**
+
+```python
+# Simple conftest.py pattern
+import sys
+sys.path.append("../..")
+from test_tools.fixtures.common import *
+
+# Complex conftest.py with ScenarioData class
+import boto3
+import pytest
+
+from service_wrapper import ServiceWrapper  # illustrative wrapper module name
+
+class ScenarioData:
+ def __init__(self, service_client, service_stubber):
+ self.service_client = service_client
+ self.service_stubber = service_stubber
+ self.wrapper = ServiceWrapper(service_client)
+
+@pytest.fixture
+def scenario_data(make_stubber):
+ client = boto3.client("service")
+ stubber = make_stubber(client)
+ return ScenarioData(client, stubber)
+```
+
+**🚨 CRITICAL Testing Requirements:**
+- ✅ **ALWAYS** use the centralized `test_tools` infrastructure
+- ✅ **ALWAYS** support both stubbed and real AWS testing modes
+- ✅ **ALWAYS** mark integration tests with `@pytest.mark.integ`
+- ✅ **ALWAYS** ensure proper cleanup in integration tests
+- ✅ **ALWAYS** use the `make_stubber` fixture for consistent stubbing
+- ✅ **CRITICAL**: Use COMPLETE AWS data structures in tests (see below)
+
+**🚨 CRITICAL: Complete AWS Data Structures Required**
+
+**❌ COMMON MISTAKE: Using minimal test data that fails validation**
+```python
+# WRONG - Missing required AWS fields
+findings = [{"Id": "finding-1", "Type": "SomeType", "Severity": 8.0}]
+```
+
+**✅ CORRECT: Complete AWS data structures**
+```python
+# RIGHT - All required AWS fields included
+findings = [{
+ "Id": "finding-1",
+ "AccountId": "123456789012",
+ "Arn": "arn:aws:service:region:account:resource/id",
+ "Type": "SomeType",
+ "Severity": 8.0,
+ "CreatedAt": "2023-01-01T00:00:00.000Z",
+ "UpdatedAt": "2023-01-01T00:00:00.000Z",
+ "Region": "us-east-1",
+ "SchemaVersion": "2.0",
+ "Resource": {"ResourceType": "Instance"}
+}]
+```
+
+**CRITICAL**: Always check AWS API documentation for required fields before creating test data.
+
+#### Error Handling Patterns
+```python
+import boto3
+from botocore.exceptions import ClientError, NoCredentialsError
+
+def example_function():
+ try:
+ # AWS service call
+ response = client.operation()
+ return response
+ except ClientError as e:
+ error_code = e.response['Error']['Code']
+ if error_code == 'SpecificError':
+ # Handle specific error
+ pass
+ else:
+ # Handle general client errors
+ raise
+ except NoCredentialsError:
+ # Handle credential issues
+ raise
+```
+
+#### Testing Standards
+- **Test markers**: Use `@pytest.mark.integ` for integration tests
+- **Fixtures**: Create reusable fixtures for AWS resources
+- **Cleanup**: Ensure proper resource cleanup in tests
+- **Mocking**: Use `boto3` stubber for unit tests when appropriate
+
+#### Requirements File Pattern
+**MANDATORY for every service directory:**
+
+```txt
+# requirements.txt - minimum versions
+boto3>=1.26.137
+botocore>=1.29.137
+```
+
+**Requirements Guidelines:**
+- ✅ **ALWAYS** specify minimum compatible versions
+- ✅ **ALWAYS** include both boto3 and botocore
+- ✅ **ALWAYS** test with specified minimum versions
+- ✅ **NEVER** pin to exact versions unless absolutely necessary
+
+#### Documentation Requirements
+- **Module docstrings**: Include purpose and usage examples
+- **Function docstrings**: Follow Google or NumPy docstring format
+- **Inline comments**: Explain complex AWS service interactions
+- **README sections**: Include setup instructions and prerequisites
+- **Snippet tags**: Include proper snippet tags for documentation generation
+
+### Language-Specific Pattern Errors to Avoid
+- ❌ **NEVER create scenarios without checking existing patterns**
+- ❌ **NEVER use camelCase for Python variables or functions**
+- ❌ **NEVER ignore proper exception handling for AWS operations**
+- ❌ **NEVER skip virtual environment setup**
+
+### 🚨 MANDATORY COMPLETION CHECKLIST
+
+**❌ WORK IS NOT COMPLETE UNTIL ALL THESE COMMANDS PASS:**
+
+```bash
+# 1. MANDATORY: Unit tests must pass
+python -m pytest -m "not integ" -v
+
+# 2. MANDATORY: Integration tests must pass
+python -m pytest -m "integ" -v
+
+# 3. MANDATORY: Code formatting must be applied
+black .
+
+# 4. MANDATORY: Linting must pass
+pylint --rcfile=.github/linters/.python-lint python/example_code/{service}/
+
+# 5. MANDATORY: Documentation must be updated
+cd .tools/readmes
+source .venv/bin/activate
+python -m writeme --languages Python:3 --services {service}
+
+# 6. MANDATORY: ALL EXAMPLE FILES MUST BE EXECUTED TO VALIDATE CREATION
+PYTHONPATH=python:python/example_code/{service} python python/example_code/{service}/{service}_hello.py
+PYTHONPATH=python:python/example_code/{service} python python/example_code/{service}/scenario_{service}_basics.py
+# Test wrapper functions directly
+PYTHONPATH=python:python/example_code/{service} python -c "from {service}_wrapper import {Service}Wrapper; wrapper = {Service}Wrapper.from_client(); print('✅ Wrapper functions working')"
+```
+
+**🚨 CRITICAL**: If ANY of these commands fail, the work is INCOMPLETE and must be fixed.
+
+**🚨 MANDATORY VALIDATION REQUIREMENT**:
+- **ALL generated example files MUST be executed successfully to validate their creation**
+- **Hello examples MUST run without errors and display expected output**
+- **Scenario examples MUST run interactively and complete all phases**
+- **Wrapper classes MUST be importable and instantiable**
+- **Any runtime errors or import failures indicate incomplete implementation**
+
+### Best Practices
+- ✅ **ALWAYS follow the established `{service}_basics.py` or scenario patterns**
+- ✅ **ALWAYS use snake_case naming conventions**
+- ✅ **ALWAYS include proper error handling for AWS service calls**
+- ✅ **ALWAYS use virtual environments for dependency management**
+- ✅ **ALWAYS include type hints where they improve code clarity**
+- ✅ **CRITICAL**: Follow the mandatory workflow sequence exactly
+- ✅ **CRITICAL**: Use complete AWS data structures in all tests
+- ✅ **CRITICAL**: Create service stubber before conftest.py
+- ✅ **CRITICAL**: Include ScenarioData class for complex services
+
+#### Metadata File Pattern
+**MANDATORY for documentation generation:**
+
+**CRITICAL**: Always check the specification file first for metadata requirements:
+- **Specification Location**: `scenarios/basics/{service}/SPECIFICATION.md`
+- **Metadata Section**: Contains exact metadata keys and structure to use
+- **Use Spec Metadata As-Is**: Copy the metadata table from the specification exactly
+
+**Specification Metadata Table Format:**
+```
+## Metadata
+
+|action / scenario |metadata file |metadata key |
+|--- |--- |--- |
+|`ActionName` |{service}_metadata.yaml |{service}_ActionName |
+|`Service Basics Scenario` |{service}_metadata.yaml |{service}_Scenario |
+```
+
+**Implementation Steps:**
+1. **Read Specification**: Always read `scenarios/basics/{service}/SPECIFICATION.md` first
+2. **Extract Metadata Table**: Use the exact metadata keys from the specification
+3. **Create Metadata File**: Create `.doc_gen/metadata/{service}_metadata.yaml`
+4. **Follow Spec Structure**: Use the metadata keys exactly as specified in the table
+
+**Standard Metadata Structure (when no spec exists):**
+```yaml
+# .doc_gen/metadata/{service}_metadata.yaml
+{service}_Hello:
+ title: Hello &{Service};
+ title_abbrev: Hello &{Service};
+ synopsis: get started using &{Service};.
+ category: Hello
+ languages:
+ Python:
+ versions:
+ - sdk_version: 3
+ github: python/example_code/{service}
+ excerpts:
+ - description:
+ snippet_tags:
+ - python.example_code.{service}.Hello
+ services:
+ {service}: {ListOperation}
+```
+
+**Metadata Requirements:**
+- ✅ **ALWAYS** check specification file for metadata requirements FIRST
+- ✅ **ALWAYS** use exact metadata keys from specification table
+- ✅ **ALWAYS** include Hello scenario metadata
+- ✅ **ALWAYS** include all action and scenario metadata as specified
+- ✅ **ALWAYS** use proper snippet tags matching code
+- ✅ **ALWAYS** validate metadata with writeme tool
+
+### Integration with Knowledge Base and Specifications
+
+**MANDATORY Pre-Implementation Workflow:**
+
+1. **Check Specification File FIRST:**
+ - **Location**: `scenarios/basics/{service}/SPECIFICATION.md`
+ - **Extract**: API actions, error handling requirements, metadata table
+ - **Use**: Exact metadata keys and structure from specification
+
+2. **Knowledge Base Consultation:**
+ - Query `coding-standards-KB` for "Python-code-example-standards"
+ - Query `Python-premium-KB` for "Python implementation patterns"
+ - Follow KB-documented patterns for file structure and naming
+
+3. **Implementation Priority:**
+ - **Specification requirements**: Use exact API actions and metadata from spec
+ - **KB patterns**: Follow established coding patterns and structure
+ - **Existing examples**: Validate against existing Python examples only after KB consultation
+
+**Critical Rule**: When a specification exists, always use its metadata table exactly as provided. Never create custom metadata keys when the specification defines them.
+
+## 🚨 COMMON FAILURE PATTERNS TO AVOID 🚨
+
+### ❌ Pattern 1: Skipping Knowledge Base Consultation
+**Mistake**: Starting to code immediately without KB research
+**Result**: Wrong file structure, missing patterns, rejected code
+**Fix**: ALWAYS do ListKnowledgeBases + QueryKnowledgeBases FIRST
+
+### ❌ Pattern 2: Wrong Testing Infrastructure Order
+**Mistake**: Creating conftest.py before service stubber
+**Result**: Import errors, test failures, broken infrastructure
+**Fix**: Create stubber FIRST, then conftest.py
+
+### ❌ Pattern 3: Missing ScenarioData Class
+**Mistake**: Using simple conftest.py for complex services
+**Result**: Difficult test setup, inconsistent patterns
+**Fix**: Always use ScenarioData class for services with scenarios
+
+### ❌ Pattern 4: Incomplete AWS Data Structures
+**Mistake**: Using minimal test data missing required AWS fields
+**Result**: Parameter validation errors, test failures
+**Fix**: Use complete AWS data structures with all required fields
+
+### ❌ Pattern 5: Wrong Test Commands
+**Mistake**: Using `pytest test/` instead of proper markers
+**Result**: Not following established patterns, inconsistent testing
+**Fix**: Use `python -m pytest -m "not integ"` and `python -m pytest -m "integ"`
+
+### ❌ Pattern 6: Skipping Code Quality
+**Mistake**: Not running black, pylint, or writeme
+**Result**: Inconsistent formatting, linting errors, outdated docs
+**Fix**: ALWAYS run all mandatory commands before considering work complete
+
+### ✅ SUCCESS PATTERN: Follow the Exact Sequence
+1. Knowledge base consultation
+2. Service stubber creation
+3. Stubber factory integration
+4. Conftest.py with ScenarioData
+5. Implementation with complete data structures
+6. All mandatory commands execution
+
+**REMEMBER**: These patterns exist because they prevent the exact mistakes made in this session. Follow them exactly.
\ No newline at end of file
diff --git a/.kiro/steering/rust-tech.md b/.kiro/steering/rust-tech.md
new file mode 100644
index 00000000000..90826c4267d
--- /dev/null
+++ b/.kiro/steering/rust-tech.md
@@ -0,0 +1,166 @@
+# Rust Technology Stack & Build System
+
+## Rust SDK v1 Development Environment
+
+### Build Tools & Dependencies
+- **Build System**: Cargo
+- **Testing Framework**: Built-in Rust testing
+- **Package Manager**: Cargo with crates.io
+- **SDK Version**: AWS SDK for Rust v1
+- **Rust Version**: Latest stable Rust
+
+### Common Build Commands
+
+```bash
+# Development
+cargo check # Check compilation without building
+cargo build # Build project
+cargo build --release # Build optimized release version
+
+# Testing
+cargo test # Run all tests
+cargo test --test integration # Run integration tests
+cargo test -- --nocapture # Run tests with output
+
+# Execution
+cargo run --bin hello # Run hello scenario
+cargo run --bin getting-started # Run getting started scenario
+cargo run --bin {scenario-name} # Run specific scenario
+```
+
+### Rust-Specific Pattern Requirements
+
+**CRITICAL**: Rust examples follow a specific directory structure pattern. Always examine existing Rust examples (like EC2) before creating new ones.
+
+#### Correct Structure for Rust Scenarios
+```
+rustv1/examples/{service}/
+├── Cargo.toml
+├── README.md
+├── src/
+│   ├── lib.rs
+│   ├── {service}.rs              # Service wrapper
+│   ├── bin/
+│   │   ├── hello.rs              # MANDATORY: Hello scenario
+│   │   ├── {action-one}.rs       # Individual action file
+│   │   ├── {action-two}.rs       # Individual action file, etc.
+│   │   └── {scenario-name}.rs    # Other scenario entry points
+│   └── {scenario_name}/
+│       ├── mod.rs
+│       ├── scenario.rs           # Main scenario logic
+│       └── tests/
+│           └── mod.rs            # Integration tests
+```
+
+#### Key Structural Points
+- **MANDATORY**: Every service must include `src/bin/hello.rs` as the simplest example
+- **Follow the EC2 example structure exactly** - it's the canonical pattern
+- **Service wrapper goes in `src/{service}.rs`** (e.g., `src/comprehend.rs`)
+- **Tests go in `{scenario_name}/tests/mod.rs`** for integration testing
+- **Hello scenario**: Should demonstrate the most basic service operation
+
+#### File Naming Conventions
+- Use snake_case for all Rust files and directories
+- Binary files: `{action}.rs` in `src/bin/` directory
+- Service modules: `{service}.rs` in `src/` directory
+- Scenario modules: `{scenario_name}/mod.rs` and `scenario.rs`
+
+#### Hello Scenario Structure
+- **File location**: `src/bin/hello.rs`
+- **Function structure**: `main()` function as entry point with `tokio::main` attribute
+- **Documentation**: Include module-level documentation explaining the hello example
+
+#### Code Structure Standards
+- **Modules**: Use `mod.rs` files for module organization
+- **Functions**: Use snake_case for function names
+- **Structs/Enums**: Use PascalCase for type names
+- **Constants**: Use UPPER_SNAKE_CASE for constants
+- **Async/Await**: Use `tokio` runtime for async operations
+
+#### Error Handling Patterns
+```rust
+use aws_sdk_s3::{Client, Error};
+use aws_config::meta::region::RegionProviderChain;
+
+#[tokio::main]
+async fn main() -> Result<(), Error> {
+    let region_provider = RegionProviderChain::default_provider().or_else("us-east-1");
+    let config = aws_config::from_env().region(region_provider).load().await;
+    let client = Client::new(&config);
+
+    match client.list_buckets().send().await {
+        Ok(response) => {
+            // Handle successful response
+            println!("Buckets: {:?}", response.buckets());
+            Ok(())
+        }
+        Err(e) => {
+            // Handle error: convert the SDK error into the crate's Error type
+            eprintln!("Error: {}", e);
+            Err(e.into())
+        }
+    }
+}
+```
+
+#### Testing Standards
+- **Integration tests**: Place in `{scenario_name}/tests/mod.rs`
+- **Unit tests**: Include `#[cfg(test)]` modules in source files
+- **Async testing**: Use `#[tokio::test]` for async test functions
+- **Test naming**: Use descriptive function names with snake_case
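+
+A minimal sketch of an integration test that follows these conventions; it makes a real AWS call and assumes credentials and a default region are available in the environment (the operation shown is only an illustration):
+
+```rust
+// Illustrative test for `{scenario_name}/tests/mod.rs`.
+use aws_sdk_s3::Client;
+
+#[tokio::test]
+async fn lists_buckets_without_error() {
+    // Load region and credentials the same way the examples do.
+    let config = aws_config::from_env().load().await;
+    let client = Client::new(&config);
+
+    // Real AWS call: the test passes as long as the request succeeds.
+    let response = client
+        .list_buckets()
+        .send()
+        .await
+        .expect("list_buckets should succeed with valid credentials");
+
+    println!("Found {} buckets", response.buckets().len());
+}
+```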
+
+#### Cargo.toml Configuration
+```toml
+[package]
+name = "{service}-examples"
+version = "0.1.0"
+edition = "2021"
+
+[[bin]]
+name = "hello"
+path = "src/bin/hello.rs"
+
+[[bin]]
+name = "{action-name}"
+path = "src/bin/{action-name}.rs"
+
+[dependencies]
+aws-config = "1.0"
+aws-sdk-{service} = "1.0"
+tokio = { version = "1.0", features = ["full"] }
+tracing-subscriber = "0.3"
+```
+
+#### Documentation Requirements
+- **Module documentation**: Use `//!` for module-level docs
+- **Function documentation**: Use `///` for function documentation
+- **Inline comments**: Explain complex AWS service interactions
+- **README sections**: Include Cargo setup and execution instructions
+
+### Language-Specific Pattern Errors to Avoid
+- ❌ **NEVER assume file naming patterns** without checking existing examples
+- ❌ **NEVER skip the mandatory `src/bin/hello.rs` file**
+- ❌ **NEVER use camelCase for Rust identifiers**
+- ❌ **NEVER ignore proper error handling with Result types**
+- ❌ **NEVER forget to use async/await for AWS operations**
+
+### Best Practices
+- ✅ **ALWAYS examine `rustv1/examples/ec2/` structure first**
+- ✅ **ALWAYS include the mandatory hello scenario**
+- ✅ **ALWAYS use snake_case naming conventions**
+- ✅ **ALWAYS handle errors with Result types and proper error propagation**
+- ✅ **ALWAYS use tokio runtime for async AWS operations**
+- ✅ **ALWAYS follow the established directory structure exactly**
+
+### Cargo Workspace Integration
+- **Workspace member**: Each service example is a workspace member
+- **Shared dependencies**: Common dependencies managed at workspace level
+- **Build optimization**: Shared target directory for faster builds
+
+### Integration with Knowledge Base
+Before creating Rust code examples:
+1. Query `coding-standards-KB` for "Rust-code-example-standards"
+2. Query `Rust-premium-KB` for "Rust implementation patterns"
+3. **CRITICAL**: Always examine existing EC2 example structure as canonical pattern
+4. Follow KB-documented patterns for Cargo project structure and module organization
+5. Validate against existing Rust examples only after KB consultation
\ No newline at end of file
diff --git a/.kiro/steering/structure.md b/.kiro/steering/structure.md
new file mode 100644
index 00000000000..4dee7903485
--- /dev/null
+++ b/.kiro/steering/structure.md
@@ -0,0 +1,135 @@
+# Project Organization & Structure
+
+## Top-Level Directory Structure
+
+### Language-Specific Directories
+Each programming language has its own top-level directory:
+- `python/` - Python 3 examples (Boto3)
+- `javav2/` - Java SDK v2 examples
+- `dotnetv3/` - .NET SDK 3.5+ examples
+- `javascriptv3/` - JavaScript SDK v3 examples
+- `gov2/` - Go SDK v2 examples
+- `kotlin/` - Kotlin examples
+- `php/` - PHP examples
+- `ruby/` - Ruby examples
+- `rustv1/` - Rust SDK v1 examples
+- `swift/` - Swift examples (preview)
+- `cpp/` - C++ examples
+- `sap-abap/` - SAP ABAP examples
+
+### Legacy Directories
+- `java/`, `javascript/`, `go/`, `.dotnet/` - Legacy SDK versions (maintenance mode)
+
+### Shared Resources
+- `applications/` - Cross-service example applications
+- `resources/` - Shared components (CDK, CloudFormation, sample files)
+- `scenarios/` - Multi-service workflow examples
+- `aws-cfn/` - CloudFormation templates
+- `aws-cli/` - CLI examples
+
+### Tooling & Infrastructure
+- `.tools/` - Build tools, validation scripts, README generators
+- `.doc_gen/` - Documentation generation metadata and cross-content
+- `.github/` - GitHub Actions workflows and linters
+- `.kiro/` - Kiro steering rules and configuration
+
+## Within Each Language Directory
+
+### Standard Structure Pattern
+```
+{language}/
+├── README.md                 # Language-specific setup instructions
+├── Dockerfile                # Container image definition
+├── example_code/             # Service-specific examples
+│   ├── {service}/            # AWS service (e.g., s3, dynamodb)
+│   │   ├── README.md         # Service examples documentation
+│   │   ├── requirements.txt  # Dependencies (Python)
+│   │   ├── pom.xml           # Maven config (Java)
+│   │   ├── *.{ext}           # Example code files
+│   │   └── test/             # Unit and integration tests
+│   └── ...
+├── cross_service/            # Multi-service applications
+├── usecases/                 # Complete use case tutorials (Java)
+└── test_tools/               # Testing utilities (Python)
+```
+
+## Code Organization Conventions
+
+### File Naming
+- Use descriptive names that indicate the AWS service and operation
+- Follow language-specific naming conventions (snake_case for Python, PascalCase for C#)
+- Include service prefix: `s3_list_buckets.py`, `DynamoDBCreateTable.java`
+
+### Code Structure
+- **Single-service actions**: Individual service operations in separate files
+- **Scenarios**: Multi-step workflows within a single service
+- **Cross-service examples**: Applications spanning multiple AWS services
+
+### Testing Structure
+- Unit tests use mocked/stubbed responses (no AWS charges)
+- Integration tests make real AWS API calls (may incur charges)
+- Test files located in `test/` subdirectories
+- Use pytest markers: `@pytest.mark.integ` for integration tests
+
+## Documentation Integration
+
+### Metadata Files
+- `.doc_gen/metadata/{service}_metadata.yaml` - Service-specific documentation metadata
+- Defines code snippets, descriptions, and cross-references
+- Used for AWS Documentation generation
+
+### Snippet Tags
+- Code examples include snippet tags for documentation extraction
+- Format: `# snippet-start:[tag_name]` and `# snippet-end:[tag_name]`
+- Tags referenced in metadata files for automated documentation
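+
+As an illustration, a Rust action file might wrap the extracted code like this; the tag name is invented for this sketch, and the comment syntax follows the example's language:
+
+```rust
+// snippet-start:[rust.example_code.s3.ListBuckets]
+/// Lists the buckets in the account and prints how many were found.
+pub async fn list_buckets(client: &aws_sdk_s3::Client) -> Result<(), aws_sdk_s3::Error> {
+    let response = client.list_buckets().send().await?;
+    println!("Found {} buckets.", response.buckets().len());
+    Ok(())
+}
+// snippet-end:[rust.example_code.s3.ListBuckets]
+```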
+
+### Cross-Content
+- `.doc_gen/cross-content/` - Shared content blocks across services
+- XML files defining reusable documentation components
+
+## Research and Discovery Best Practices
+
+### Service Implementation Research
+Before implementing examples for any AWS service, follow this mandatory sequence:
+1. **AWS Knowledge MCP Server consultation**: Use `search_documentation` and `read_documentation` to understand service fundamentals, API operations, parameters, use cases, and best practices
+2. **Amazon Bedrock Knowledge Base search**: Use `ListKnowledgeBases` and `QueryKnowledgeBases` to find existing service implementations and established patterns within this repository
+3. **Cross-language analysis**: Examine how the service is implemented across different language directories (python/, javav2/, dotnetv3/, etc.)
+4. **Pattern identification**: Look for consistent patterns across languages to understand service-specific requirements and conventions
+
+**CRITICAL REQUIREMENT**: Every AWS service implementation must begin with comprehensive AWS Knowledge MCP Server research and Amazon Bedrock Knowledge Base consultation.
+
+### Language Structure Guidelines
+When working within a specific language directory:
+1. **AWS Knowledge MCP Server for service context**: Use `search_documentation` and `read_documentation` for service-specific implementation considerations and guidance
+2. **Amazon Bedrock Knowledge Base priority**: Use `ListKnowledgeBases` and `QueryKnowledgeBases` to consult both "coding-standards-KB" and "[language]-premium-KB" for language-specific structural guidance and exemplary implementations
+3. **Template selection**: Use the best examples identified in the knowledge bases as templates for new code
+4. **Consistency maintenance**: Follow the established patterns and conventions documented in the knowledge bases for the target language
+5. **Structure validation**: Verify new implementations match the proven structures outlined in knowledge base examples
+
+### Mandatory Knowledge Base Workflow
+**BEFORE ANY CODE CREATION:**
+1. **IMMEDIATELY** execute: `ListKnowledgeBases` and `QueryKnowledgeBases("coding-standards-KB", "[language]-code-example-standards")`
+2. **IMMEDIATELY** execute: `QueryKnowledgeBases("[language]-premium-KB", "[language] implementation patterns")`
+3. **Use AWS Knowledge MCP Server tools**: `search_documentation` and `read_documentation` for service understanding
+4. **WAIT** for KB search results before proceeding
+5. **DOCUMENT** the KB findings in your response
+6. **USE** KB results as the single source of truth for all structural decisions
+
+### Command Execution Standards
+- **Single command execution**: Execute CLI commands individually rather than chaining them together
+- **Step-by-step approach**: Break complex operations into discrete, sequential commands
+- **Error isolation**: Individual command execution allows for better error tracking and resolution
+
+## Configuration Files
+
+### Root Level
+- `.gitignore` - Git ignore patterns for all languages
+- `.flake8` - Python linting configuration
+- `abaplint.json` - SAP ABAP linting rules
+
+### Language-Specific
+- `requirements.txt` - Python dependencies
+- `pom.xml` - Java Maven configuration
+- `package.json` - JavaScript/Node.js dependencies
+- `Cargo.toml` - Rust dependencies
+- `*.csproj`, `*.sln` - .NET project files
\ No newline at end of file
diff --git a/.kiro/steering/tech.md b/.kiro/steering/tech.md
new file mode 100644
index 00000000000..9d6d68e253c
--- /dev/null
+++ b/.kiro/steering/tech.md
@@ -0,0 +1,282 @@
+# Technology Stack & Build System
+
+## Knowledge Base Workflow Enforcement
+
+### CRITICAL EFFICIENCY RULE:
+**After completing mandatory Knowledge Base consultation, NEVER examine existing code files for patterns. The KB consultation is designed to provide all necessary patterns and standards.**
+
+### Post-KB Consultation Behavior:
+- **IMMEDIATELY implement** based on KB findings
+- **TRUST KB results completely** - they contain the authoritative patterns
+- **DO NOT second-guess** by examining existing implementations
+- **REFERENCE KB findings** in your implementation decisions
+
+### Efficiency Violation Examples:
+❌ "Let me check how other services implement this"
+❌ "Let me examine existing patterns"
+❌ "Let me look at a similar service structure"
+
+### Correct Efficiency Examples:
+✅ "Based on my KB consultation, I'll implement using these patterns: [KB findings]"
+✅ "The KB consultation revealed these requirements: [implement directly]"
+
+## Multi-Language Architecture
+This repository supports 12+ programming languages, each with its own build system and conventions:
+
+**NOTE for validation**: Focus validation efforts on the primary languages (Python, JavaScript, Java, Go, .NET, PHP, Kotlin, C++, Rust). Skip validation for Swift, Ruby, and SAP ABAP as they require specialized environments or have version compatibility issues.
+
+## Language-Specific Guidelines
+
+For detailed language-specific guidelines, refer to the individual steering files:
+
+### Primary Languages & Build Tools (Validation Priority)
+- **Python**: See `python-tech.md` for Python specific guidelines (pip, venv, pytest, black, pylint, flake8)
+- **Java**: See `java-tech.md` for Java v2 specific guidelines (Maven, JUnit 5, Apache Maven Shade Plugin)
+- **JavaScript/Node.js**: See `javascript-tech.md` for JavaScript v3 specific guidelines (npm, Jest, Prettier, Biome)
+- **Rust**: See `rust-tech.md` for Rust v1 specific guidelines (Cargo, cargo test, cargo check)
+- **.NET**: See `dotnet-tech.md` for .NET specific guidelines (dotnet CLI, NuGet, dotnet-format, xUnit)
+- **Go v2**: go mod, go test, go fmt
+- **Kotlin**: Gradle, JUnit
+- **PHP**: Composer, PHPUnit
+- **C++**: CMake, custom build scripts
+
+### Secondary Languages (Skip During Validation)
+- **Ruby**: Bundler, RSpec (version compatibility issues)
+- **Swift**: Swift Package Manager (requires macOS-specific setup)
+- **SAP ABAP**: ABAP development tools (requires specialized SAP environment)
+
+## Development Tools & Standards
+
+### Code Quality
+- **Linting**: Language-specific linters (pylint, ESLint, etc.)
+- **Formatting**: Automated formatters (black, prettier, dotnet-format)
+- **Testing**: Unit and integration test suites for all examples
+- **Type Safety**: Type annotations where supported (Python, TypeScript)
+
+### Docker Support
+- Multi-language Docker images available on Amazon ECR
+- Pre-loaded dependencies and isolated environments
+- Container-based integration testing framework
+
+### Documentation Generation
+- Metadata-driven documentation using `.doc_gen/metadata/*.yaml`
+- Snippet extraction and validation
+- Integration with AWS Documentation pipeline
+- Cross-reference validation system
+
+## Research and Discovery Guidelines
+
+### Understanding AWS Services
+When working with AWS services, follow this mandatory research sequence:
+1. **Use AWS Knowledge MCP Server tools FIRST**: Use `search_documentation` and `read_documentation` to understand service fundamentals, API operations, parameters, and best practices
+2. **Query Amazon Bedrock Knowledge Base**: Use `ListKnowledgeBases` and `QueryKnowledgeBases` to find existing implementation patterns and proven code structures
+3. **Check other SDK implementations**: Look at how other language directories implement the same service for patterns and best practices
+4. **Cross-reference examples**: Compare implementations across languages to identify common patterns and service-specific requirements
+
+**CRITICAL**: Every code example must begin with AWS Knowledge MCP Server consultation and Amazon Bedrock Knowledge Base research before any implementation work begins.
+
+### MANDATORY Tool Usage Sequence
+
+**FIRST - Amazon Bedrock Knowledge Base Search for Standards:**
+```
+ListKnowledgeBases()
+QueryKnowledgeBases("coding-standards-KB", "[language]-code-example-standards")
+```
+- **REQUIRED** before any other research
+- **REQUIRED** before reading any existing files
+- **REQUIRED** before making structural decisions
+
+**SECOND - Amazon Bedrock Knowledge Base Search for Language Patterns:**
+```
+QueryKnowledgeBases("[language]-premium-KB", "[language] implementation patterns")
+```
+- Use for language-specific coding patterns and quality benchmarks
+- Required for understanding language-specific requirements
+
+**THIRD - AWS Service Understanding:**
+```
+search_documentation("What is [AWS Service] and what are its key API operations?")
+read_documentation("https://docs.aws.amazon.com/[service]/latest/[relevant-page]")
+```
+- Use for service fundamentals, API operations, parameters, and best practices
+- Required for understanding service-specific requirements
+
+### Understanding Language Structure
+When working with a specific programming language, follow this MANDATORY sequence:
+
+**STEP 1 - MANDATORY FIRST ACTION**:
+- **IMMEDIATELY** use `ListKnowledgeBases` and `QueryKnowledgeBases` to search "coding-standards-KB" for "[language]-code-example-standards"
+- **DO NOT** read any existing code files until this search is complete
+- **DO NOT** make any assumptions about structure until KB search is done
+
+**STEP 2 - Language Pattern Discovery**:
+- Query "[language]-premium-KB" through `QueryKnowledgeBases` for implementation patterns and quality benchmarks
+
+**STEP 3 - Service Context**:
+- Use AWS Knowledge MCP Server tools for service-specific implementation guidance
+
+**STEP 4 - Pattern Validation**:
+- Use KB search results as the AUTHORITATIVE guide for all structure decisions
+- Only examine existing code files to VERIFY the KB-documented patterns
+- Never use existing code as the primary source of truth
+
+**STEP 5 - Implementation**:
+- Follow KB-documented patterns exactly
+- Use KB examples as templates for new implementations
+
+**CRITICAL FAILURE POINT**: Creating code without first searching the Amazon Bedrock Knowledge Base for language standards will result in incorrect structure.
+
+**MANDATORY WORKFLOW ENFORCEMENT:**
+
+**BEFORE ANY CODE CREATION - REQUIRED FIRST ACTIONS:**
+1. **IMMEDIATELY** execute: `ListKnowledgeBases` and `QueryKnowledgeBases("coding-standards-KB", "[language]-code-example-standards")`
+2. **IMMEDIATELY** execute: `QueryKnowledgeBases("[language]-premium-KB", "[language] implementation patterns")`
+3. **WAIT** for KB search results before proceeding
+4. **DOCUMENT** the KB findings in your response
+5. **USE** KB results as the single source of truth for all structural decisions
+
+**FAILURE TO FOLLOW THIS EXACT SEQUENCE WILL RESULT IN INCORRECT CODE STRUCTURE**
+
+### CLI Command Execution
+- **Single command preference**: Execute one CLI command at a time unless combining commands is explicitly required
+- **Sequential execution**: Break complex operations into individual steps for better error handling and debugging
+- **Clear command separation**: Avoid chaining commands with && or || unless necessary for the specific workflow
+
+## Common Mistakes to Avoid
+
+### Testing
+- **NEVER accept an error as "expected." When you are given code that runs without errors, your result should run and test without errors.**
+
+### Language-Specific Pattern Errors
+- **YAML**: **ALWAYS end a `.yml` or `.yaml` file with a single trailing newline character.**
+
+### Pattern Discovery Failures
+- **Root Cause**: Assuming patterns instead of consulting knowledge bases
+- **Solution**: Always use Amazon Bedrock Knowledge Base tools before examining existing code
+- **Verification**: Compare your structure against KB-documented patterns and at least 2 existing examples in the same language
+
+## Service Example Requirements
+
+### Mandatory Hello Scenario
+**CRITICAL**: Every AWS service MUST include a "Hello" scenario as the simplest introduction to the service.
+
+**Hello Scenario Requirements:**
+- **Purpose**: Provide the simplest possible example of using the service
+- **Scope**: Demonstrate basic service connectivity and one fundamental operation
+- **Naming**: Always named "Hello {ServiceName}" (e.g., "Hello S3", "Hello Comprehend")
+- **Implementation**: Should be the most basic, minimal example possible
+- **Documentation**: Must include clear explanation of what the hello example does
+
+**Hello Scenario Examples:**
+- **S3**: List buckets or check if service is accessible
+- **DynamoDB**: List tables or describe service limits
+- **Lambda**: List functions or get account settings
+- **Comprehend**: Detect language in a simple text sample
+- **EC2**: Describe regions or availability zones
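+
+For instance, a Rust "Hello DynamoDB" along these lines would satisfy the requirements above; the exact shape varies by language, and the sketch only illustrates the idea:
+
+```rust
+use aws_sdk_dynamodb::{Client, Error};
+
+/// Hello DynamoDB: connect to the service and list the tables in the account.
+#[tokio::main]
+async fn main() -> Result<(), Error> {
+    let config = aws_config::from_env().load().await;
+    let client = Client::new(&config);
+
+    let response = client.list_tables().send().await?;
+    println!("Found {} table(s):", response.table_names().len());
+    for name in response.table_names() {
+        println!("  {name}");
+    }
+    Ok(())
+}
+```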
+
+**When to Create Hello Scenarios:**
+- **Always required**: Every service implementation must include a hello scenario
+- **First priority**: Create the hello scenario before any complex scenarios
+- **Standalone**: Hello scenarios should work independently of other examples
+- **Minimal dependencies**: Should require minimal setup or configuration
+
+## Development Workflow Requirements
+
+### Pre-Push Validation Pipeline
+When creating or modifying code examples, the following steps must be completed successfully before the work is considered complete:
+
+1. **MANDATORY Amazon Bedrock Knowledge Base Research**: Use `ListKnowledgeBases` and `QueryKnowledgeBases` for both "coding-standards-KB" and "[language]-premium-KB" - REQUIRED FIRST STEP
+2. **MANDATORY AWS Knowledge MCP Server Research**: Use `search_documentation` and `read_documentation` for comprehensive service understanding
+3. **README Generation**: Run `.tools/readmes/writeme.py` to update documentation
+4. **Integration Testing**: Execute integration tests to verify AWS service interactions
+5. **Test Validation**: All tests must pass with zero errors before considering work complete
+6. **Automatic Error Resolution**: If any step fails, automatically fix issues and re-run until success
+
+**MANDATORY COMPLETION REQUIREMENTS**:
+
+**PRE-IMPLEMENTATION CHECKLIST (MUST BE COMPLETED FIRST):**
+- [ ] **KB SEARCH COMPLETED**: Executed `ListKnowledgeBases` and `QueryKnowledgeBases` for both knowledge bases
+- [ ] **KB RESULTS DOCUMENTED**: Included KB findings in response showing evidence of consultation
+- [ ] **AWS KNOWLEDGE MCP SERVER CONSULTED**: Used `search_documentation` and `read_documentation` for service understanding
+- [ ] **PATTERNS IDENTIFIED**: Used KB results to determine file structure, naming, and organization
+
+**IMPLEMENTATION REQUIREMENTS:**
+- **ALL TESTS MUST PASS COMPLETELY**: Both unit tests and integration tests must pass with zero failures before work is considered finished
+- **NO WORK IS COMPLETE WITH FAILING TESTS**: If any test fails, the implementation must be fixed and re-tested until all tests pass
+- **Every service must include a Hello scenario** - no exceptions
+- Hello scenarios must be created first, before any complex scenarios
+
+**FAILURE CONDITIONS:**
+- **FAILURE TO SEARCH AMAZON BEDROCK KNOWLEDGE BASE FIRST WILL RESULT IN REJECTED CODE**
+- **FAILURE TO DOCUMENT KB CONSULTATION WILL RESULT IN REJECTED CODE**
+- **FAILURE TO ENSURE ALL TESTS PASS WILL RESULT IN INCOMPLETE WORK**
+- **FAILURE TO EXECUTE ALL GENERATED EXAMPLE FILES WILL RESULT IN INCOMPLETE VALIDATION**
+
+### 🚨 MANDATORY EXAMPLE FILE EXECUTION VALIDATION
+
+**CRITICAL REQUIREMENT**: All generated example files MUST be executed to validate their creation and functionality.
+
+**MANDATORY EXECUTION CHECKLIST:**
+- [ ] **Hello Examples**: All `{service}_hello` or hello scenario files must run without errors
+- [ ] **Scenario Examples**: All scenario files must run interactively and complete all phases
+- [ ] **Wrapper Classes**: All wrapper classes must be importable and instantiable
+- [ ] **Integration Tests**: All integration tests must pass against real AWS services
+- [ ] **Error-Free Execution**: Any runtime errors, import failures, or execution issues indicate incomplete implementation
+
+**EXECUTION VALIDATION REQUIREMENTS:**
+- ✅ **All examples must run without compilation/interpretation errors**
+- ✅ **All examples must connect to AWS services successfully (or handle credentials gracefully)**
+- ✅ **All examples must display expected output and complete execution**
+- ✅ **Interactive scenarios must accept user input and progress through all phases**
+- ✅ **Any errors during execution indicate incomplete or incorrect implementation**
+
+**COMMON EXECUTION ISSUES TO FIX:**
+- Import/module path errors
+- Missing dependencies
+- Incorrect AWS service client configuration
+- Malformed AWS API calls
+- Missing error handling for credential issues
+- Incomplete scenario logic or user interaction flows
+
+### Test Execution Requirements
+- **Unit Tests**: Must pass completely with no failures or errors
+- **Integration Tests**: Must pass completely (skipped tests are acceptable if properly documented)
+- **Test Coverage**: All major code paths and error conditions must be tested
+- **Error Handling**: All specified error conditions must be properly tested and handled
+- **Real AWS Integration**: Integration tests should use real AWS services where possible
+
+### Testing Standards
+- **Integration-First Approach**: All code examples must use integration tests that make real AWS API calls
+- **No Mocking/Stubbing**: Avoid mocked responses; tests should demonstrate actual AWS service interactions
+- **Real Resource Testing**: Tests should create, modify, and clean up actual AWS resources
+- **Cost Awareness**: Integration tests may incur AWS charges - ensure proper resource cleanup
+- **Cleanup Requirements**: All integration tests must properly clean up created resources
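+
+A hedged sketch of the integration-first pattern in Rust: create a real resource, exercise the example against it, then delete it. The bucket naming and the assumption of a region where an unqualified `CreateBucket` works (such as us-east-1) are illustrative:
+
+```rust
+use aws_sdk_s3::Client;
+
+#[tokio::test]
+async fn creates_and_cleans_up_a_bucket() {
+    let config = aws_config::from_env().load().await;
+    let client = Client::new(&config);
+
+    // Unique-ish name so repeated runs do not collide.
+    let bucket = format!("doc-example-cleanup-test-{}", std::process::id());
+
+    client
+        .create_bucket()
+        .bucket(&bucket)
+        .send()
+        .await
+        .expect("create_bucket should succeed");
+
+    // ... exercise the example code against the real bucket here ...
+
+    // Cleanup: every resource the test creates must be removed.
+    client
+        .delete_bucket()
+        .bucket(&bucket)
+        .send()
+        .await
+        .expect("delete_bucket should succeed");
+}
+```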
+
+### Update/Write READMEs
+
+Use a virtual environment to run the WRITEME tool. Navigate to `.tools/readmes` and run the following commands:
+
+```
+python -m venv .venv
+
+# Windows
+.venv\Scripts\activate
+
+# Linux or MacOS
+source .venv/bin/activate
+
+# Install dependencies (all platforms), then run WRITEME
+python -m pip install -r requirements_freeze.txt
+python -m writeme --languages [language]:[version] --services [service]
+```
+
+### Error Handling Protocol
+- If WRITEME fails: Fix documentation issues, missing metadata, or file structure problems
+- If tests fail: Debug AWS service interactions, fix credential issues, or resolve resource conflicts
+- Re-run both steps until both pass successfully
+- Only then is the example considered complete and ready for commit
+
+## AWS SDK Versions
+- Use latest stable SDK versions for each language
+- Follow SDK-specific best practices and patterns
+- Maintain backward compatibility where possible
+- Credential configuration via AWS credentials file or environment variables
\ No newline at end of file
diff --git a/scenarios/basics/guardduty/SPECIFICATION.md b/scenarios/basics/guardduty/SPECIFICATION.md
new file mode 100644
index 00000000000..2c8c666bbfc
--- /dev/null
+++ b/scenarios/basics/guardduty/SPECIFICATION.md
@@ -0,0 +1,87 @@
+# Amazon GuardDuty Specification
+
+This document contains a draft proposal for an *Amazon GuardDuty Basics Scenario* code example, generated by the Code Examples SpecGen AI tool. The specification describes a potential code example scenario based on research, usage data, service information, and AI assistance. It should be reviewed for accuracy and correctness before proceeding to a final specification.
+
+### Relevant documentation
+
+* [Getting started with GuardDuty](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_settingup.html)
+* [What is Amazon GuardDuty?](https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html)
+* [Amazon GuardDuty API Reference](https://docs.aws.amazon.com/guardduty/latest/APIReference/Welcome.html)
+* [GuardDuty Pricing](https://aws.amazon.com/guardduty/pricing/)
+
+### API Actions Used
+
+* [CreateDetector](https://docs.aws.amazon.com/guardduty/latest/APIReference/API_CreateDetector.html)
+* [GetDetector](https://docs.aws.amazon.com/guardduty/latest/APIReference/API_GetDetector.html)
+* [ListDetectors](https://docs.aws.amazon.com/guardduty/latest/APIReference/API_ListDetectors.html)
+* [CreateSampleFindings](https://docs.aws.amazon.com/guardduty/latest/APIReference/API_CreateSampleFindings.html)
+* [ListFindings](https://docs.aws.amazon.com/guardduty/latest/APIReference/API_ListFindings.html)
+* [GetFindings](https://docs.aws.amazon.com/guardduty/latest/APIReference/API_GetFindings.html)
+* [DeleteDetector](https://docs.aws.amazon.com/guardduty/latest/APIReference/API_DeleteDetector.html)
+
+## Proposed example structure
+
+The details below describe how this example would run for the customer. It includes a Hello service example (included for all services), and the scenario details. The scenario code would also be presented as Action snippets, with a code snippet for each SDK action.
+
+### Hello
+
+The Hello example is a separate runnable example.
+
+* Set up the GuardDuty service client
+* Check if GuardDuty is available in the current region
+* List any existing detectors
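+
+A hedged sketch of what the Hello example could look like in one SDK (Rust); crate and accessor names follow the SDK's generated conventions and should be verified against the actual `aws-sdk-guardduty` crate:
+
+```rust
+use aws_sdk_guardduty::{Client, Error};
+
+/// Hello GuardDuty: list the detectors in the current region.
+#[tokio::main]
+async fn main() -> Result<(), Error> {
+    let config = aws_config::from_env().load().await;
+    let client = Client::new(&config);
+
+    let response = client.list_detectors().send().await?;
+    let detector_ids = response.detector_ids();
+    if detector_ids.is_empty() {
+        println!("No GuardDuty detectors found in this region.");
+    } else {
+        println!("Found {} detector(s): {:?}", detector_ids.len(), detector_ids);
+    }
+    Ok(())
+}
+```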
+
+## Scenario
+
+#### Setup
+
+* Create a GuardDuty detector to enable threat detection
+* Verify the detector is successfully created and active
+* Display detector configuration and status
+
+#### Sample Findings Generation
+
+* Generate sample findings to demonstrate GuardDuty capabilities
+* List the generated sample findings
+* Display finding details including severity and type
+
+#### Findings Management
+
+* Retrieve detailed information about specific findings
+* Filter findings by severity level
+* Display finding metadata and threat information
+
+#### Cleanup
+
+* Archive or acknowledge sample findings
+* Optionally disable the detector (with user confirmation)
+* Clean up resources created during the example
+
+## Errors
+
+SDK code examples include basic exception handling for each action used. The table below lists the exceptions that the code handles for each service action.
+
+|Action |Error |Handling |
+|--- |--- |--- |
+|`CreateDetector` |BadRequestException |Validate input parameters and notify user of invalid configuration. |
+|`CreateDetector` |InternalServerErrorException |Retry operation with exponential backoff. |
+|`GetDetector` |BadRequestException |Validate detector ID format and existence. |
+|`GetDetector` |InternalServerErrorException |Retry operation and handle service unavailability. |
+|`ListDetectors` |BadRequestException |Validate pagination parameters and retry. |
+|`ListDetectors` |InternalServerErrorException |Handle service errors gracefully. |
+|`CreateSampleFindings` |BadRequestException |Validate detector ID and finding types. |
+|`CreateSampleFindings` |InternalServerErrorException |Retry sample finding generation. |
+|`ListFindings` |BadRequestException |Validate finding criteria and pagination. |
+|`GetFindings` |BadRequestException |Validate finding IDs format. |
+|`DeleteDetector` |BadRequestException |Confirm detector exists before deletion. |
+|`DeleteDetector` |InternalServerErrorException |Handle deletion failures gracefully. |
+
+## Metadata
+
+|action / scenario |metadata file |metadata key |
+|--- |--- |--- |
+|`CreateDetector` |guardduty_metadata.yaml |guardduty_CreateDetector |
+|`GetDetector` |guardduty_metadata.yaml |guardduty_GetDetector |
+|`ListDetectors` |guardduty_metadata.yaml |guardduty_ListDetectors |
+|`CreateSampleFindings` |guardduty_metadata.yaml |guardduty_CreateSampleFindings |
+|`ListFindings` |guardduty_metadata.yaml |guardduty_ListFindings |
+|`GetFindings` |guardduty_metadata.yaml |guardduty_GetFindings |
+|`DeleteDetector` |guardduty_metadata.yaml |guardduty_DeleteDetector |
+|`Amazon GuardDuty Basics Scenario` |guardduty_metadata.yaml |guardduty_Scenario |
+
diff --git a/scenarios/basics/inspector/SPECIFICATION.md b/scenarios/basics/inspector/SPECIFICATION.md
new file mode 100644
index 00000000000..51eece5c95f
--- /dev/null
+++ b/scenarios/basics/inspector/SPECIFICATION.md
@@ -0,0 +1,89 @@
+# Amazon Inspector Specification
+
+This document contains a draft proposal for an *Amazon Inspector Basics Scenario* code example, generated by the Code Examples SpecGen AI tool. The specification describes a potential code example scenario based on research, usage data, service information, and AI assistance. It should be reviewed for accuracy and correctness before proceeding to a final specification.
+
+### Relevant documentation
+
+* [Getting started with Amazon Inspector](https://docs.aws.amazon.com/inspector/latest/user/getting_started.html)
+* [What is Amazon Inspector?](https://docs.aws.amazon.com/inspector/latest/user/what-is-inspector.html)
+* [Amazon Inspector API Reference](https://docs.aws.amazon.com/inspector/v2/APIReference/Welcome.html)
+* [Amazon Inspector Pricing](https://aws.amazon.com/inspector/pricing/)
+
+### API Actions Used
+
+* [Enable](https://docs.aws.amazon.com/inspector/v2/APIReference/API_Enable.html)
+* [BatchGetAccountStatus](https://docs.aws.amazon.com/inspector/v2/APIReference/API_BatchGetAccountStatus.html)
+* [ListFindings](https://docs.aws.amazon.com/inspector/v2/APIReference/API_ListFindings.html)
+* [BatchGetFindingDetails](https://docs.aws.amazon.com/inspector/v2/APIReference/API_BatchGetFindingDetails.html)
+* [ListCoverage](https://docs.aws.amazon.com/inspector/v2/APIReference/API_ListCoverage.html)
+* [Disable](https://docs.aws.amazon.com/inspector/v2/APIReference/API_Disable.html)
+
+## Proposed example structure
+
+The details below describe how this example would run for the customer. It includes a Hello service example (included for all services), and the scenario description. The scenario code would also be presented as Action snippets, with a code snippet for each SDK action.
+
+### Hello
+
+The Hello example is a separate runnable example.
+
+* Set up the Inspector service client
+* Check the current account status for Inspector
+* Display available scan types and regions
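+
+A hedged sketch of what the Hello example could look like in one SDK (Rust); crate and accessor names follow the SDK's generated conventions and should be verified against the actual `aws-sdk-inspector2` crate:
+
+```rust
+use aws_sdk_inspector2::{Client, Error};
+
+/// Hello Amazon Inspector: report the calling account's activation status.
+#[tokio::main]
+async fn main() -> Result<(), Error> {
+    let config = aws_config::from_env().load().await;
+    let client = Client::new(&config);
+
+    // With no account IDs supplied, the call reports on the calling account.
+    let response = client.batch_get_account_status().send().await?;
+    println!("Retrieved status for {} account(s).", response.accounts().len());
+    for account in response.accounts() {
+        println!("{:?}", account);
+    }
+    Ok(())
+}
+```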
+
+## Scenario
+
+#### Setup
+
+* Enable Amazon Inspector for the account
+* Verify Inspector is successfully activated
+* Display account status and enabled scan types
+
+#### Coverage Assessment
+
+* List coverage statistics for EC2 instances, ECR repositories, and Lambda functions
+* Display resource coverage details
+* Show scanning status for different resource types
+
+#### Findings Management
+
+* List security findings across all resource types
+* Filter findings by severity level (CRITICAL, HIGH, MEDIUM, LOW)
+* Retrieve detailed information for specific findings
+
+#### Vulnerability Analysis
+
+* Display vulnerability details including CVE information
+* Show affected resources and remediation guidance
+* Filter findings by resource type (EC2, ECR, Lambda)
+
+#### Cleanup
+
+* Optionally disable Inspector scanning (with user confirmation)
+* Display final account status
+
+## Errors
+
+SDK code examples include basic exception handling for each action used. The table below lists the exceptions that the code handles for each service action.
+
+|Action |Error |Handling |
+|--- |--- |--- |
+|`Enable` |ValidationException |Validate resource types and account permissions. |
+|`Enable` |AccessDeniedException |Notify user of insufficient permissions and exit. |
+|`BatchGetAccountStatus` |ValidationException |Validate account IDs format. |
+|`BatchGetAccountStatus` |AccessDeniedException |Handle permission errors gracefully. |
+|`ListFindings` |ValidationException |Validate filter criteria and pagination parameters. |
+|`ListFindings` |InternalServerException |Retry operation with exponential backoff. |
+|`BatchGetFindingDetails` |ValidationException |Validate finding ARNs format. |
+|`BatchGetFindingDetails` |AccessDeniedException |Handle access denied for specific findings. |
+|`ListCoverage` |ValidationException |Validate filter and pagination parameters. |
+|`Disable` |ValidationException |Validate resource types for disabling. |
+|`Disable` |ConflictException |Handle cases where Inspector cannot be disabled. |
+
+## Metadata
+
+|action / scenario |metadata file |metadata key |
+|--- |--- |--- |
+|`Enable` |inspector_metadata.yaml |inspector_Enable |
+|`BatchGetAccountStatus` |inspector_metadata.yaml |inspector_BatchGetAccountStatus |
+|`ListFindings` |inspector_metadata.yaml |inspector_ListFindings |
+|`BatchGetFindingDetails` |inspector_metadata.yaml |inspector_BatchGetFindingDetails |
+|`ListCoverage` |inspector_metadata.yaml |inspector_ListCoverage |
+|`Disable` |inspector_metadata.yaml |inspector_Disable |
+|`Amazon Inspector Basics Scenario` |inspector_metadata.yaml |inspector_Scenario |
+