@@ -0,0 +1,2 @@
.venv
log-groups-results.json
@@ -0,0 +1,135 @@
# Log Groups Check

This script identifies CloudWatch log groups with 2 subscription filters and counts log group resource policies across all AWS accounts in your organization. This information is useful during ASEA to LZA upgrade preparation to understand the current state of logging configurations.

The script operates by:
1. Retrieving all active accounts from AWS Organizations
2. Assuming a role in each account across specified regions
3. Calling CloudWatch Logs APIs to:
   - Describe all log groups and their subscription filters
   - Count log group resource policies using `describe_resource_policies`
4. Identifying log groups with 2 subscription filters
5. Generating both console output and JSON files with the results

## Prerequisites

### Python Requirements
- Python 3.9 or later
- Virtual environment setup

#### Setting up the Python Environment

1. Create and activate a virtual environment:
```bash
python -m venv .venv
source .venv/bin/activate
```

2. Install required dependencies:
```bash
pip install -r requirements.txt
```

### AWS Permissions

Required permissions:
- Access to an IAM Role in the ASEA management account
- Permission to list accounts in AWS Organizations
- Ability to assume a role in all AWS accounts containing log groups

Note: While the `ASEA-PipelineRole` satisfies these requirements, it has elevated permissions. We recommend using a least-privilege role with read-only access. See the Sample Policy in the Appendix for the minimum required CloudWatch Logs permissions.
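Such a read-only role also needs a trust policy that allows it to be assumed from the management account. A minimal sketch, where `111111111111` is a placeholder for your management account ID:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```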

## Usage

Prerequisites:
- Valid credentials for your ASEA management account with Organizations access

Execute the script:
```bash
python log-groups-check.py [options]
```

**WARNING:** For an organization with a large number of accounts, or when checking multiple regions, the script can take several minutes to complete.

Configuration options
|Flag|Description|Default|
|----|-----------|-------|
|--accel-prefix|Prefix of your ASEA installation|ASEA|
|--role-to-assume|Role to assume in each account|{accel_prefix}-PipelineRole|
|--regions|List of AWS regions to check (separated by spaces)|ca-central-1|
|--max-workers|Maximum number of parallel workers|10|
|--output-file|Output JSON file path|log-groups-results.json|
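
For example, to check two regions with more parallel workers (the region list and worker count here are illustrative):

```bash
python log-groups-check.py --regions ca-central-1 us-east-1 --max-workers 20
```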

The script provides output both in the console and as a JSON file.

## Understanding the Results

### Console Output
The script displays real-time progress as it processes each account-region combination, showing:
- Account name and ID being processed
- Number of log groups found with 2 subscription filters
- Number of log group resource policies found
- Final summary with totals across all accounts
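
An abbreviated example of the console output (account names, IDs, and counts are placeholders):

```
Fetching active accounts from AWS Organizations...
Found 25 active accounts

Processing 25 accounts across 1 regions (25 total combinations) with 10 parallel workers...
Processing: Production Account (123456789012) in ca-central-1
  Production Account (ca-central-1): Assumed role successfully, checking log groups...
  Production Account (ca-central-1): Found 1 log groups with 2 subscription filters
  Production Account (ca-central-1): Found 3 log group resource policies
...
Processing complete!
Results saved to: log-groups-results.json
```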

### JSON Output (log-groups-results.json)
The JSON file contains detailed results for each account-region combination that was processed successfully:

```json
[
  {
    "accountId": "123456789012",
    "accountName": "Production Account",
    "region": "ca-central-1",
    "logGroups": [
      {
        "logGroupName": "/aws/lambda/my-function",
        "filters": [
          {
            "filterName": "filter1",
            "destinationArn": "arn:aws:logs:ca-central-1:123456789012:destination:my-destination"
          },
          {
            "filterName": "filter2",
            "destinationArn": "arn:aws:kinesis:ca-central-1:123456789012:stream/my-stream"
          }
        ]
      }
    ],
    "logGroupsWithTwoFiltersCount": 1,
    "resourcePoliciesCount": 3
  }
]
```

### Key Fields
|Field|Description|
|-----|-----------|
|accountId|AWS account ID|
|accountName|AWS account name from Organizations|
|region|AWS region processed|
|logGroups|Array of log groups found, with their subscription filters|
|logGroupsWithTwoFiltersCount|Number of log groups with exactly 2 subscription filters|
|resourcePoliciesCount|Total number of log group resource policies in the account-region|
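
The JSON file can also be post-processed. A short sketch (not part of the tooling) that flags account-regions likely to need attention before the upgrade, using the same thresholds as the script's final report:

```python
import json

# Load the results produced by log-groups-check.py
with open("log-groups-results.json") as f:
    results = json.load(f)

for r in results:
    # Flag account-regions at risk: any log group already at the 2-filter
    # limit, or more than 8 resource policies (the hard limit is 10 and
    # LZA needs to create two)
    if r["logGroupsWithTwoFiltersCount"] > 0 or r["resourcePoliciesCount"] > 8:
        print(f'{r["accountName"]} ({r["accountId"]}) {r["region"]}: '
              f'{r["logGroupsWithTwoFiltersCount"]} log groups at the filter limit, '
              f'{r["resourcePoliciesCount"]} resource policies')
```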




## Appendix - Sample Policy

Sample minimal IAM Policy for CloudWatch Logs access:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CloudWatchLogsReadOnly",
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups",
        "logs:DescribeSubscriptionFilters",
        "logs:DescribeResourcePolicies"
      ],
      "Resource": "*"
    }
  ]
}
```
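
The credentials used to run the script in the management account also need Organizations and STS permissions. A minimal sketch, assuming the default `{accel_prefix}-PipelineRole` naming (adjust the role ARN pattern to match your `--role-to-assume`):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListOrgAccounts",
      "Effect": "Allow",
      "Action": "organizations:ListAccounts",
      "Resource": "*"
    },
    {
      "Sid": "AssumeCheckRole",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::*:role/ASEA-PipelineRole"
    }
  ]
}
```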
@@ -0,0 +1,196 @@
#!/usr/bin/env python3
import argparse
import json
from concurrent.futures import ThreadPoolExecutor, as_completed

import boto3


def get_log_group_resource_policies_count(logs_client):
    """Get count of log group resource policies."""
    try:
        response = logs_client.describe_resource_policies()
        return len(response.get('resourcePolicies', []))
    except Exception as e:
        print(f"Error getting resource policies: {e}")
        return 0


def get_log_groups_filters(logs_client):
    """Fetch all log groups and return their subscription filters."""
    paginator = logs_client.get_paginator('describe_log_groups')
    log_groups_with_filters = []

    for page in paginator.paginate():
        for log_group in page['logGroups']:
            log_group_name = log_group['logGroupName']

            try:
                response = logs_client.describe_subscription_filters(
                    logGroupName=log_group_name
                )

                log_groups_with_filters.append({
                    'logGroupName': log_group_name,
                    'filters': response['subscriptionFilters']
                })

            except Exception as e:
                print(f"Error getting filters for {log_group_name}: {e}")

    return log_groups_with_filters


def get_active_accounts():
    """Get all active accounts from AWS Organizations."""
    print("Fetching active accounts from AWS Organizations...")
    org_client = boto3.client('organizations')
    paginator = org_client.get_paginator('list_accounts')

    active_accounts = []
    for page in paginator.paginate():
        for account in page['Accounts']:
            if account['Status'] == 'ACTIVE':
                active_accounts.append({
                    'Id': account['Id'],
                    'Name': account['Name']
                })

    print(f"Found {len(active_accounts)} active accounts")
    return active_accounts


def assume_role_and_get_logs_client(account_id, role_name, region):
    """Assume role in target account and return logs client."""
    sts_client = boto3.client('sts')

    role_arn = f"arn:aws:iam::{account_id}:role/{role_name}"
    response = sts_client.assume_role(
        RoleArn=role_arn,
        RoleSessionName=f"LogGroupsCheck-{account_id}"
    )

    credentials = response['Credentials']
    return boto3.client(
        'logs',
        region_name=region,
        aws_access_key_id=credentials['AccessKeyId'],
        aws_secret_access_key=credentials['SecretAccessKey'],
        aws_session_token=credentials['SessionToken']
    )


def process_account(account, role_name, region):
    """Process a single account in a specific region and return results."""
    account_id = account['Id']
    account_name = account['Name']

    print(f"Processing: {account_name} ({account_id}) in {region}")

    try:
        logs_client = assume_role_and_get_logs_client(account_id, role_name, region)
        print(f"  {account_name} ({region}): Assumed role successfully, checking log groups...")

        log_groups = get_log_groups_filters(logs_client)
        resource_policies_count = get_log_group_resource_policies_count(logs_client)

        # Count log groups that have exactly two subscription filters
        log_groups_with_two_filters_count = sum(1 for log_group in log_groups if len(log_group['filters']) == 2)

        print(f"  {account_name} ({region}): Found {log_groups_with_two_filters_count} log groups with 2 subscription filters")
        print(f"  {account_name} ({region}): Found {resource_policies_count} log group resource policies")

        return {
            'accountId': account_id,
            'accountName': account_name,
            'region': region,
            'resourcePoliciesCount': resource_policies_count,
            'logGroupsWithTwoFiltersCount': log_groups_with_two_filters_count,
            'logGroups': log_groups
        }

    except Exception as e:
        print(f"  {account_name} ({region}): Error - {e}")
        return None


def main():
    parser = argparse.ArgumentParser(
        prog='log-groups-check',
        usage='%(prog)s [options]',
        description='Check for log groups with exactly 2 subscription filters across AWS accounts'
    )
    parser.add_argument('-r', '--role-to-assume',
                        help="Role to assume in each account")
    parser.add_argument('-p', '--accel-prefix',
                        default='ASEA', help="Accelerator Prefix")
    parser.add_argument('--regions', nargs='+',
                        default=['ca-central-1'], help="AWS regions to check")
    parser.add_argument('--max-workers', type=int, default=10,
                        help="Maximum number of parallel workers")
    parser.add_argument('-o', '--output-file', default='log-groups-results.json',
                        help="Output JSON file path")

    args = parser.parse_args()

    role_name = args.role_to_assume if args.role_to_assume else f"{args.accel_prefix}-PipelineRole"
    regions = args.regions
    max_workers = args.max_workers

    accounts = get_active_accounts()
    all_results = []

    # Create account-region combinations
    account_region_pairs = [(account, region) for account in accounts for region in regions]

    print(f"\nProcessing {len(accounts)} accounts across {len(regions)} regions ({len(account_region_pairs)} total combinations) with {max_workers} parallel workers...")

    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        # Submit all account-region processing tasks
        future_to_pair = {
            executor.submit(process_account, account, role_name, region): (account, region)
            for account, region in account_region_pairs
        }

        # Collect results as they complete
        for future in as_completed(future_to_pair):
            try:
                result = future.result()
            except Exception as e:
                account, region = future_to_pair[future]
                print(f"  {account['Name']} ({region}): Failed to process - {e}")
                result = None
            if result:
                all_results.append(result)

    # Save results to JSON file
    with open(args.output_file, 'w') as f:
        json.dump(all_results, f, indent=2)

    # Final report
    total_log_groups = sum(len(result['logGroups']) for result in all_results)
    total_resource_policies = sum(result['resourcePoliciesCount'] for result in all_results)
    print("\nProcessing complete!")
    print(f"Results saved to: {args.output_file}")
    print(f"\nFinal Report: checked {total_log_groups} log groups across {len(all_results)} account-region combinations")
    print(f"Total resource policies: {total_resource_policies}")
    print("=" * 80)

    # Highlight account-regions that need attention: any log group with two
    # subscription filters, or more than 8 resource policies (the hard limit
    # is 10 per account and LZA needs to create two)
    for result in all_results:
        if result['logGroupsWithTwoFiltersCount'] > 0 or result['resourcePoliciesCount'] > 8:
            print(f"\nAccount: {result['accountName']} ({result['accountId']}) - Region: {result['region']}")
            print(f"Resource policies: {result['resourcePoliciesCount']}")
            print(f"Log Groups with 2 filters: {result['logGroupsWithTwoFiltersCount']}")

            for lg in result['logGroups']:
                if len(lg['filters']) >= 2:
                    print(f"  • {lg['logGroupName']}")
                    for i, filter_info in enumerate(lg['filters'], 1):
                        print(f"    Filter {i}: {filter_info['filterName']} -> {filter_info['destinationArn']}")


if __name__ == "__main__":
    main()
@@ -0,0 +1 @@
boto3
@@ -1,7 +1 @@
-boto3==1.38.1
-botocore==1.38.2
-jmespath==1.0.1
-python-dateutil==2.9.0.post0
-s3transfer==0.12.0
-six==1.17.0
-urllib3==2.4.0
+boto3
src/mkdocs/docs/lza-upgrade/troubleshooting.md (12 changes: 12 additions & 0 deletions)
@@ -149,3 +149,15 @@ ASEA-SecurityResourcesStack-<account>-<region> | CREATE_FAILED | AWS::Clo
Cause: There is a hard limit of 10 CloudWatch Logs resource policies per account. LZA needs to create two.

Workaround: Remove existing CloudWatch Logs resource policies in the problematic account and region to free up sufficient space for LZA. You can use the AWS CLI [describe-resource-policies](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/logs/describe-resource-policies.html) command to list existing resource policies.
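
For example (the region shown is illustrative; deleting a resource policy is destructive, so confirm it is no longer needed first):

```bash
# List existing CloudWatch Logs resource policies in the account/region
aws logs describe-resource-policies --region ca-central-1

# Remove a policy that is no longer needed to free up a slot
aws logs delete-resource-policy --policy-name <policy-name> --region ca-central-1
```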

## Cannot add accelerator subscription destination (Logging Stage)

The Logging Stage fails with this error: `Message returned: Error: Cloudwatch log group has 2 subscription destinations, can not add accelerator subscription destination!!!!.`

Cause: There is a hard limit of two subscription filters per CloudWatch log group. ASEA adds one to each log group to centralize logs to the central Logging bucket, and during the upgrade the ASEA filter is replaced by a new filter created by LZA. If the ASEA filter is missing on a log group and the log group already contains two subscription filters, LZA cannot create its filter, resulting in the error.

Workaround: One subscription filter slot needs to be available for ASEA/LZA log centralization. Remove one of the custom subscription filters on the affected log groups. Alternatively, you can modify the LZA configuration to [exclude certain log groups](https://awslabs.github.io/landing-zone-accelerator-on-aws/latest/typedocs/interfaces/___packages__aws_accelerator_config_dist_packages__aws_accelerator_config_lib_models_global_config.ICloudWatchLogsConfig.html#exclusions) from the subscription filters and log centralization.
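
For example, using the AWS CLI (log group and filter names are placeholders):

```bash
# List the subscription filters on the affected log group
aws logs describe-subscription-filters --log-group-name /aws/lambda/my-function

# Remove one of the custom filters to free a slot for the ASEA/LZA filter
aws logs delete-subscription-filter \
  --log-group-name /aws/lambda/my-function \
  --filter-name my-custom-filter
```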

Resolution: Once upgraded to LZA, we recommend moving to the [ACCOUNT level subscription filters configuration](https://awslabs.github.io/landing-zone-accelerator-on-aws/latest/typedocs/interfaces/___packages__aws_accelerator_config_dist_packages__aws_accelerator_config_lib_models_global_config.ICloudWatchSubscriptionConfig.html), which frees up the two available log-group-level subscription filters for your own needs while maintaining log centralization.

Note: A script, [log-group-checks.py](https://github.com/aws-samples/aws-secure-environment-accelerator/tree/main/reference-artifacts/Custom-Scripts/lza-upgrade/tools/log-group-checks), is available in the upgrade tools folder to help identify whether your landing zone has log groups with 2 subscription filters. Only log groups with 2 subscription filters where neither is the ASEA filter (i.e. `ASEA-LogDestinationOrg`) will cause an issue.