
Security and SSL/TLS Certificate Handling #23

Open
zlovtnik wants to merge 5 commits into main from dev

Conversation

zlovtnik (Owner) commented Nov 26, 2025

This pull request introduces several improvements focused on security, SSL/TLS certificate handling, and operational clarity for the application's Docker setup and Oracle backend. The most significant changes are the enforcement of a runtime-provided KEYSTORE_PASSWORD for SSL/TLS, automated keystore generation and validation, improved documentation and roadmap tracking, and updates to Docker and compose files for consistency and security.

Security and SSL/TLS Certificate Handling

  • Enforced that KEYSTORE_PASSWORD must be set at runtime (no default value) for the SSL/TLS keystore and truststore, improving security compliance. The password is required for application startup and is not embedded in the image. (docker-compose.yml, Dockerfile, docker-start.sh)
  • Added automated pre-flight validation and generation of truststore.p12 and keystore.p12 from SSL certificates in the startup script, including error handling and permission checks. This ensures keystores are always present and correctly configured before the application runs. (docker-start.sh)
  • Added a certificate verification script (verify-certs.sh) to the Docker image for additional pre-flight checks. (Dockerfile)
  • Updated Docker Compose volume mount for certificates to use /etc/nginx/certs, aligning with the new certificate directory structure. (docker-compose.yml)
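
The fail-fast requirement described above can be sketched as follows; the KEYSTORE_PASSWORD variable name comes from the PR, but the validation helper itself is hypothetical, not the actual startup code:

```java
// Hypothetical sketch: refuse to start when KEYSTORE_PASSWORD is missing,
// mirroring the runtime-required (no default) behavior described above.
public final class KeystorePasswordCheck {

    /** Returns the password if present and non-blank, otherwise throws. */
    public static String requireKeystorePassword(String value) {
        if (value == null || value.isBlank()) {
            throw new IllegalStateException(
                "KEYSTORE_PASSWORD must be set at runtime; no default is embedded in the image");
        }
        return value;
    }

    public static void main(String[] args) {
        // The real startup path would read System.getenv("KEYSTORE_PASSWORD").
        System.out.println(requireKeystorePassword("example-only-password"));
    }
}
```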

Documentation and Roadmap

  • Updated the performance roadmap to reflect the current status of metrics, connection pool monitoring, query streaming, leak detection, and indexing strategies, with detailed notes on what is implemented and what remains. This provides clear guidance for future work and highlights completed backend optimizations. (ROADMAP_PERFORMANCE.md)
  • Added a new ETL & Data Pipeline section to the main roadmap, indicating upcoming features and focus areas. (ROADMAP.md)

Summary by CodeRabbit

Release Notes

  • New Features

    • Multi-level caching system with L1 (in-memory) and L2 (persistent) support for improved performance.
    • Cache warm-up capability with priority-based task execution at application startup.
    • Batch operation metrics collection for visibility into processing throughput and error rates.
  • Infrastructure & Security

    • Enhanced certificate management with automated verification and validation scripts.
    • Improved OpenTelemetry integration with standardized dependency versions.
    • Logstash with per-index lifecycle policies for app, nginx, and audit logs.
  • Documentation

    • New certificate setup and rotation policy guides for deployment across Docker, Kubernetes, and traditional environments.
    • Logstash ILM configuration documentation with per-pattern index management.


- Add .gitignore patterns to exclude secrets, private keys (.secrets/, *.key, *.pem), and cert files (except *.pub) to prevent accidental commits of sensitive data and improve security.
- Introduce new ETL & Data Pipeline Features roadmap item in ROADMAP.md to plan data integration capabilities.
- Mark performance monitoring infrastructure tasks as implemented in ROADMAP_PERFORMANCE.md, detailing integrations like Prometheus JVM metrics, HikariCP pool metrics, Oracle JDBC optimizations, and custom business telemetry for enhanced observability and optimization tracking.
- Introduce a comprehensive ETL roadmap for transforming the Spring Boot GraphQL application into an enterprise-grade data pipeline platform, including job orchestration, a data transformation DSL, external connectors, validation, CDC, monitoring, and ILM policies. Updated the main ROADMAP.md to reference the new document.
- Update .gitignore to exclude secrets, private keys, and specific logstash certs
- Modify Dockerfile to generate self-signed SSL certs in PKCS12 format for Java truststore/keystore
- Pin OpenTelemetry dependencies to v1.40.0 for stability and compatibility
- Adjust docker-compose for required KEYSTORE_PASSWORD without default value for security

This commit enhances security by preventing accidental commits of sensitive data and properly configuring SSL/TLS for the application.
vercel bot commented Nov 26, 2025

The latest updates on your projects:

spring-graphql | Deployment: Ready | Preview: Ready | Updated (UTC): Nov 27, 2025 1:50pm

coderabbitai bot commented Nov 26, 2025

Walkthrough

This PR introduces SSL/TLS certificate infrastructure with generation and verification scripts, restructures Logstash logging with per-type ILM policies and index templates, adds batch operation metrics instrumentation, implements a multi-level cache service (L1 Caffeine + L2 Redis) with warm-up orchestration, and updates OpenTelemetry dependencies to use BOM-managed versions.

Changes

Cohort / File(s) Summary
Certificate & TLS Management
.gitignore, Dockerfile, docker-compose.yml, docker-start.sh, verify-certs.sh, logstash/certs/*
Adds certificate ignore patterns, certificate verification script, keystore generation in Docker startup, and Logstash certificate management with generation script. Removes committed private keys for security.
Logstash & Elasticsearch Configuration
logstash/config/pipelines.conf, logstash/ilm-policies/logs-{app,audit,nginx}-policy.json, logstash/index-templates/logs-{app,audit,nginx}-template.json, logstash/setup-ilm.sh, logstash/certs/generate-certs.sh
Restructures Elasticsearch output to use per-type ILM policies (app, audit, nginx) instead of global policy; adds corresponding index templates, bootstrap scripts, and lifecycle policies with distinct rollover and retention settings.
Batch Processing Metrics
src/main/java/com/rcs/ssf/batch/BatchMetricsCollector.java, BatchOperationsHandler.java, src/test/java/com/rcs/ssf/batch/BatchMetricsCollectorTest.java, BatchOperationsHandlerTest.java
Introduces BatchMetricsCollector for recording batch success/failures, retries, throughput, and error rates; integrates optional metrics injection into BatchOperationsHandler with conditional recording.
Multi-Level Cache Service
src/main/java/com/rcs/ssf/cache/MultiLevelCacheService.java, CacheWarmupService.java, src/main/java/com/rcs/ssf/config/CacheConfig.java, src/test/java/com/rcs/ssf/cache/MultiLevelCacheServiceTest.java, CacheWarmupServiceTest.java
Implements two-tier cache (L1: Caffeine, L2: Redis) with read-through, write-through, and invalidation; adds cache warm-up orchestration with priority-based task execution and startup integration.
Logging & Observability
src/main/resources/logback-logstash-appender.xml, logback-spring.xml, build.gradle, src/main/resources/application.yml
Adds Logstash TCP appender with SSL/TLS and MDC mappings; restructures logback profiles to conditionally include Logstash via docker/production profiles; replaces Spring Boot managed OpenTelemetry with explicit BOM-pinned versions.
Minor Improvements & Documentation
src/main/java/com/rcs/ssf/tracing/LatencyAnalyzer.java, monitoring/elasticsearch/templates/logs-app-template.json, ROADMAP.md, ROADMAP_PERFORMANCE.md, docs/CERTIFICATE_SETUP.md, docs/CERTIFICATE_ROTATION_POLICY.md, logstash/ILM_CONFIGURATION.md, logstash/certs/README.md
Adds nullity checks and iterator-based eviction in LatencyAnalyzer; updates monitoring templates to allow dynamic MDC fields; adds comprehensive TLS/certificate, ILM, and roadmap documentation.
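
Per-type ILM policies of the kind this restructuring introduces generally share one shape: a hot phase with rollover conditions and a delete phase with a retention age. A minimal illustrative policy is shown below; the rollover and retention values are placeholders, not the PR's actual settings:

```json
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "10gb", "max_age": "7d" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

Distinct app, nginx, and audit policies would vary only in these thresholds (e.g., audit logs typically keep a much longer retention).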

Sequence Diagram(s)

sequenceDiagram
    participant App as Application
    participant CWUS as CacheWarmupService
    participant ML as MultiLevelCacheService
    participant L1 as L1 Cache<br/>(Caffeine)
    participant L2 as L2 Cache<br/>(Redis)

    Note over App,L2: Application Startup
    App->>CWUS: ApplicationReadyEvent
    alt async enabled
        CWUS->>CWUS: performWarmupAsync()
    else sync
        CWUS->>CWUS: performWarmup()
    end
    CWUS->>CWUS: Execute by Priority<br/>(1→2→3)
    loop For each registered task
        CWUS->>ML: put(cacheName, key, value, ttl)
        ML->>L1: put to Caffeine
        ML->>L2: put to Redis<br/>with TTL
        ML->>CWUS: Success ✓
        CWUS->>CWUS: Increment metrics
    end
    CWUS->>App: Warmup Complete
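
The priority-ordered execution (1→2→3) shown above can be sketched with plain collections; the task shape and method names here are assumptions, not the actual CacheWarmupService API:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of priority-based warm-up task execution, as in the diagram above.
public final class WarmupSketch {

    record WarmupTask(String name, int priority, Runnable action) {}

    /** Runs tasks in ascending priority order and returns the execution order. */
    public static List<String> runByPriority(List<WarmupTask> tasks) {
        List<String> executed = new ArrayList<>();
        tasks.stream()
             .sorted(Comparator.comparingInt(WarmupTask::priority))
             .forEach(t -> { t.action().run(); executed.add(t.name()); });
        return executed;
    }

    public static void main(String[] args) {
        List<WarmupTask> tasks = List.of(
            new WarmupTask("reference-data", 2, () -> {}),
            new WarmupTask("hot-queries", 1, () -> {}),
            new WarmupTask("reports", 3, () -> {}));
        System.out.println(runByPriority(tasks)); // [hot-queries, reference-data, reports]
    }
}
```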
sequenceDiagram
    participant Client
    participant MLCS as MultiLevelCacheService
    participant L1 as L1 Cache<br/>(Caffeine)
    participant L2 as L2 Cache<br/>(Redis)
    participant Supplier as Supplier<T>

    Note over Client,Supplier: Cache Read-Through Flow
    Client->>MLCS: getOrCompute(cacheName, key,<br/>supplier, ttl)
    MLCS->>L1: get(key)
    alt L1 Hit
        L1-->>MLCS: value ✓
        MLCS-->>Client: return value
    else L1 Miss
        MLCS->>L2: get(key)
        alt L2 Hit
            L2-->>MLCS: value ✓
            MLCS->>L1: promote to L1
            MLCS-->>Client: return value
        else L2 Miss
            MLCS->>Supplier: get()
            Supplier-->>MLCS: computed value
            MLCS->>L1: put(value, ttl)
            MLCS->>L2: put(value, ttl)<br/>with Redis
            MLCS-->>Client: return value
        end
    end
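
The read-through flow above can be sketched with plain maps standing in for Caffeine (L1) and Redis (L2). TTLs, serialization, and metrics are omitted, so treat this as an illustration of the tiering logic, not the actual MultiLevelCacheService API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Sketch of the two-tier read-through flow from the diagram above.
public final class TwoLevelCacheSketch<T> {

    private final Map<String, T> l1 = new ConcurrentHashMap<>(); // stand-in for Caffeine
    private final Map<String, T> l2 = new ConcurrentHashMap<>(); // stand-in for Redis

    public T getOrCompute(String key, Supplier<T> supplier) {
        T value = l1.get(key);
        if (value != null) {
            return value;               // L1 hit
        }
        value = l2.get(key);
        if (value != null) {
            l1.put(key, value);         // promote L2 hit into L1
            return value;
        }
        value = supplier.get();         // miss in both tiers: compute
        l1.put(key, value);             // write-through to both tiers
        l2.put(key, value);
        return value;
    }

    public static void main(String[] args) {
        TwoLevelCacheSketch<String> cache = new TwoLevelCacheSketch<>();
        System.out.println(cache.getOrCompute("user:1", () -> "computed"));
        System.out.println(cache.getOrCompute("user:1", () -> "never-called"));
    }
}
```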
sequenceDiagram
    participant Handler as BatchOperationsHandler
    participant BMC as BatchMetricsCollector<br/>(optional)
    participant Task as Batch Task
    participant DB as Database

    Note over Handler,DB: Batch Execution with Metrics
    Handler->>Task: Execute batch
    Task->>DB: Process items
    DB-->>Task: Results
    alt Success
        Handler->>BMC: recordBatchSuccess(name, count, duration)
        BMC->>BMC: Increment counters<br/>Record timer<br/>Update throughput
    else Partial Success
        Handler->>BMC: recordBatchPartialSuccess(name, ...)
        BMC->>BMC: Record failures<br/>Calculate error rate
    else Retry
        Handler->>BMC: recordRetry(name)
        BMC->>BMC: Increment retries
    end
    BMC-->>Handler: ✓ Metrics recorded
    Handler->>Handler: Continue execution
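
The success / partial-success / retry paths in the diagram can be sketched with stdlib counters; Micrometer registration and the real signatures are omitted, so the method names below only mirror the diagram, not the actual BatchMetricsCollector:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of batch metrics accounting, using plain atomics instead of Micrometer.
public final class BatchMetricsSketch {

    private final AtomicLong processed = new AtomicLong();
    private final AtomicLong failed = new AtomicLong();
    private final AtomicLong retries = new AtomicLong();

    public void recordBatchSuccess(String op, long items)  { processed.addAndGet(items); }

    public void recordBatchPartialSuccess(String op, long ok, long bad) {
        processed.addAndGet(ok);
        failed.addAndGet(bad);
    }

    public void recordRetry(String op) { retries.incrementAndGet(); }

    /** Error rate over everything seen so far, in [0, 1]. */
    public double errorRate() {
        long total = processed.get() + failed.get();
        return total == 0 ? 0.0 : (double) failed.get() / total;
    }

    public static void main(String[] args) {
        BatchMetricsSketch m = new BatchMetricsSketch();
        m.recordBatchSuccess("user-import", 90);
        m.recordBatchPartialSuccess("user-import", 8, 2);
        m.recordRetry("user-import");
        System.out.printf("errorRate=%.2f%n", m.errorRate()); // 2 failed out of 100
    }
}
```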

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Areas requiring attention:

  • MultiLevelCacheService & CacheWarmupService: High logic density with thread-safety considerations (SCAN-based bulk invalidation, concurrent task registration, phase-based execution with timeouts). Verify concurrent access patterns, TTL synchronization between L1/L2, and exception handling in async paths.
  • BatchMetricsCollector: Complex windowing logic for throughput calculation with atomic state management; verify lock usage, window expiry semantics, and that metrics are properly aggregated across operation types.
  • Elasticsearch ILM restructuring: Logstash pipeline refactoring with per-type policies introduces significant data flow changes; verify index patterns, rollover aliases, and migration path for existing indices.
  • OpenTelemetry dependency migration: BOM-based pinning changes dependency resolution; verify no version conflicts or compatibility issues with Spring Boot 3.5.7 and downstream instrumentations.
  • Certificate generation & verification: Bash scripts with file permissions and keystore operations; verify KEYSTORE_PASSWORD handling, permission enforcement, and error propagation in Docker startup flow.
  • Logback profile conditional inclusion: Verify that all profiles (dev, docker, production) correctly include/exclude Logstash appender and that environment variables are properly resolved.
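
The windowing logic flagged in the BatchMetricsCollector bullet can be sketched as follows. This is an illustrative stand-in assuming a single fixed window; the collector's actual fields, window semantics, and locking may differ:

```java
// Sketch of windowed throughput: counts accumulate per fixed time window,
// and the rate is derived from the elapsed portion of the current window.
public final class ThroughputWindowSketch {

    private final long windowMs;
    private long windowStart;
    private long itemsInWindow;

    public ThroughputWindowSketch(long windowMs, long nowMs) {
        this.windowMs = windowMs;
        this.windowStart = nowMs;
    }

    /** Records items, resetting the window first if it has expired. */
    public synchronized void record(long items, long nowMs) {
        if (nowMs - windowStart >= windowMs) {
            windowStart = nowMs;   // window expired: start a fresh one
            itemsInWindow = 0;
        }
        itemsInWindow += items;
    }

    /** Items per second over the elapsed portion of the current window. */
    public synchronized double throughput(long nowMs) {
        long elapsed = Math.max(1, nowMs - windowStart);
        return itemsInWindow * 1000.0 / elapsed;
    }

    public static void main(String[] args) {
        ThroughputWindowSketch t = new ThroughputWindowSketch(60_000, 0);
        t.record(500, 1_000);
        t.record(500, 2_000);
        System.out.println(t.throughput(2_000)); // 1000 items over 2s = 500.0
    }
}
```

Reviewing the real collector amounts to checking that its equivalents of the window reset and the elapsed-time division stay consistent under concurrent callers.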

Poem

🐰 Whiskers twitch with glee,
Certs spring forth from scripts so free,
Cache tiers hum with dual-song,
Metrics dance the whole day long—
Warm-up warmly, logs so bright,
This rabbit's code is dynamite!

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 45.30%, which is below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The pull request title 'Security and SSL/TLS Certificate Handling' directly and concisely summarizes the main change, which involves enforcing SSL/TLS certificate handling, adding certificate validation, and managing keystores.


coderabbitai bot left a comment

Actionable comments posted: 20

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/main/java/com/rcs/ssf/batch/BatchOperationsHandler.java (1)

273-296: Missing metrics recording for individual operations.

executeIndividualOperations doesn't call recordBatchMetrics, so operations below the batch threshold won't be tracked. This creates a blind spot in observability:

     long durationMs = System.currentTimeMillis() - startTimeMs;
     double throughput = durationMs <= 0 ? processedItems * 1000.0 : (processedItems * 1000.0) / durationMs;

     log.info("Individual operations '{}' completed: processed={}, failed={}, throughput={:.2f} items/sec",
         operationName, processedItems, failedItems, throughput);

+    // Record metrics for individual operations
+    recordBatchMetrics(operationName, processedItems, failedItems, durationMs);
+
     return new BatchResult(failedItems == 0, processedItems, failedItems, durationMs);
♻️ Duplicate comments (1)
logstash/config/pipelines.conf (1)

228-250: Same ILM and authentication concerns apply.

The nginx and audit log outputs have the same configuration pattern as the app logs. The verification requests and authentication suggestions from the previous comment apply equally here.

🧹 Nitpick comments (23)
src/main/java/com/rcs/ssf/tracing/LatencyAnalyzer.java (1)

229-241: Iterator-based eviction looks good; consider tweaking the log message (optional)

Using Iterator + remove() is a reasonable way to evict an arbitrary entry once the cap is exceeded and stays consistent with the “arbitrary eviction” comment. One small nit: the log line "Removed oldest latency stat..." is a bit misleading since ConcurrentHashMap iteration order is not chronological; something like "Removed latency stat entry..." would better match the actual behavior.

logstash/setup-ilm.sh (3)

7-10: Strengthen error handling with additional bash options.

The script uses set -e but could benefit from additional safety measures:

-set -e
+set -euo pipefail

 ELASTICSEARCH_HOST="${ES_HOST:-http://elasticsearch:9200}"
-LOGSTASH_CONFIG_DIR="/usr/share/logstash"
+LOGSTASH_CONFIG_DIR="${LOGSTASH_CONFIG_DIR:-/usr/share/logstash}"
  • set -u catches undefined variable references
  • set -o pipefail ensures pipe failures are caught
  • Making LOGSTASH_CONFIG_DIR configurable via environment improves flexibility

15-30: Add curl timeout to prevent indefinite hangs.

The health check function retries appropriately, but individual curl requests could hang indefinitely if Elasticsearch is partially responsive.

-        if curl -s -f "$ELASTICSEARCH_HOST/_cluster/health" > /dev/null 2>&1; then
+        if curl -s -f --max-time 5 "$ELASTICSEARCH_HOST/_cluster/health" > /dev/null 2>&1; then
             echo "✓ Elasticsearch is ready"

The --max-time 5 option ensures each health check attempt times out after 5 seconds.


38-42: Consider adding Elasticsearch authentication.

All curl requests to Elasticsearch lack authentication headers. While this may be acceptable within a trusted Docker network, consider adding authentication for defense in depth:

     curl -X PUT \
+        -u "${ES_USER:-elastic}:${ES_PASSWORD}" \
         -H "Content-Type: application/json" \
         "$ELASTICSEARCH_HOST/_ilm/policy/$policy_name" \

Apply similar changes to all Elasticsearch API calls. This provides:

  • Protection against lateral movement if the container is compromised
  • Alignment with security best practices
  • Audit trail for administrative actions

Based on learnings, the Spring Boot gateway serves on port 8443 with TLS; consider whether Elasticsearch should also enforce authentication even within the stack.

verify-certs.sh (2)

10-10: Consider using or removing SCRIPT_NAME variable.

The SCRIPT_NAME variable is declared but never used. You could either remove it or incorporate it into error messages for better diagnostics.

Apply this diff if you want to remove it:

-SCRIPT_NAME="$(basename "$0")"
 CERT_DIR="${CERT_DIR:-/etc/nginx/certs}"

Or use it in error messages:

     else
-        log_error "File not found: $filepath"
+        log_error "[$SCRIPT_NAME] File not found: $filepath"

65-65: Consider separating variable declarations from command substitutions.

Shellcheck warns that combining local with command substitution can mask return values. While your script has adequate error checking, separating these would follow best practices.

Example for line 65:

-    local actual_mode=$(stat -c %a "$filepath" 2>/dev/null || stat -f %A "$filepath" 2>/dev/null)
+    local actual_mode
+    actual_mode=$(stat -c %a "$filepath" 2>/dev/null || stat -f %A "$filepath" 2>/dev/null)

Apply similar changes to lines 86, 111-112, and 119-120.

Also applies to: 86-86, 111-112, 119-120

docker-start.sh (1)

121-130: Keystore permissions may be too permissive.

Setting chmod 644 makes keystores world-readable. Since keystore.p12 contains private key material, consider chmod 600 (owner read/write only) or chmod 640 (owner read/write, group read) for better security.

 # Verify keystore ownership and permissions
 echo "Verifying keystore permissions..."
-chmod 644 "$TRUSTSTORE_PATH" "$KEYSTORE_PATH" 2>/dev/null || {
+# Truststore (public certs) can be more permissive, keystore (private key) should be restricted
+chmod 644 "$TRUSTSTORE_PATH" 2>/dev/null || {
+    echo "WARNING: Could not set permissions on truststore (may be mounted read-only)"
+}
+chmod 600 "$KEYSTORE_PATH" 2>/dev/null || {
     echo "WARNING: Could not set permissions on keystores (may be mounted read-only)"
 }
src/test/java/com/rcs/ssf/cache/MultiLevelCacheServiceTest.java (3)

29-31: Consider using stricter Mockito settings.

Strictness.LENIENT allows unused stubs and argument mismatches to go undetected. Unless there's a specific reason (e.g., shared setup with conditionally-used stubs), Strictness.STRICT_STUBS (the default) provides better feedback during test development.


53-71: L1 cache hit test could explicitly verify no Redis get was called.

The comment states "Verify Redis was called for put but not for get (L1 hit)" but only verifies the set call. Consider adding verify(valueOperations, never()).get(anyString()) to make the assertion complete.

         // Verify Redis was called for put but not for get (L1 hit)
         verify(valueOperations, times(1)).set(anyString(), any(), any(Duration.class));
+        verify(valueOperations, never()).get(anyString());
     }

158-173: Test verifies Redis interactions but not L1 invalidation.

The testInvalidateAll test mocks Redis behavior and verifies execute and delete calls, but doesn't verify that the L1 (Caffeine) cache was also cleared. Consider adding a get after invalidateAll to confirm L1 entries are gone.

src/test/java/com/rcs/ssf/batch/BatchOperationsHandlerTest.java (1)

38-57: Consider adding metrics assertions for operations.

Since the handler now records metrics when a collector is present, consider adding assertions that verify metrics are recorded after batch execution (e.g., check metricsCollector.getMetrics() or query the SimpleMeterRegistry).

src/main/resources/logback-logstash-appender.xml (1)

26-83: Well-structured Logstash appender with proper security practices.

The SSL/TLS configuration correctly requires KEYSTORE_PASSWORD without a default value, aligning with security compliance. The resilience settings (1s reconnection delay, 5s timeouts) are reasonable for production use.

Consider adding a ringBufferSize or using an AsyncAppender wrapper to prevent blocking on network issues:

<!-- Optional: wrap in AsyncAppender for non-blocking behavior -->
<appender name="ASYNC_LOGSTASH" class="ch.qos.logback.classic.AsyncAppender">
    <queueSize>1024</queueSize>
    <discardingThreshold>0</discardingThreshold>
    <appender-ref ref="LOGSTASH"/>
</appender>
docs/CERTIFICATE_ROTATION_POLICY.md (1)

296-303: Shell portability: date -d is GNU-specific.

The date -d syntax works on GNU/Linux but fails on macOS (BSD date) and Alpine (BusyBox). Consider noting this or providing a more portable alternative using openssl x509 -checkend:

# Check if certificate expires within 30 days (2592000 seconds)
openssl x509 -in /etc/nginx/certs/certificate.pem -checkend 2592000 -noout
src/test/java/com/rcs/ssf/batch/BatchMetricsCollectorTest.java (2)

209-223: Clarify expected null behavior or use assertThrows.

The test catches NPE but treats both throwing and not throwing as acceptable. If null operation names are invalid, the contract should be explicit:

 @Test
 @DisplayName("Should handle null operation type gracefully")
 void testNullOperationType() {
-    // The BatchMetricsCollector uses the operation name as a tag
-    // Null operation names would cause NPE when creating timers
-    // This test verifies the behavior - it may throw NPE which is acceptable
-    // for null input since null operation names are invalid
-    try {
-        metricsCollector.recordBatchSuccess(null, 100, 100);
-        // If it doesn't throw, that's acceptable too
-    } catch (NullPointerException e) {
-        // Expected behavior - null operation names are invalid
-        assertNotNull(e);
-    }
+    // Null operation names are invalid and should throw NPE
+    assertThrows(NullPointerException.class, () ->
+        metricsCollector.recordBatchSuccess(null, 100, 100));
 }

Alternatively, if null should be tolerated, document why and use assertDoesNotThrow.


174-197: Add timeout to prevent test hangs on concurrency issues.

Concurrency tests can hang indefinitely if there's a deadlock. Add a @Timeout annotation:

 @Test
 @DisplayName("Should handle concurrent metric recording")
+@org.junit.jupiter.api.Timeout(10) // Fail if test takes > 10 seconds
 void testConcurrentMetricRecording() throws Exception {
src/test/java/com/rcs/ssf/cache/CacheWarmupServiceTest.java (2)

36-66: Heavy reliance on reflection for test setup.

The test initialization heavily uses ReflectionTestUtils to set private fields and manually create metrics. While this works, it creates tight coupling to implementation details and will break if field names change.

Consider adding package-private or protected test hooks in the production class, or using a test-specific constructor that allows injecting these values. This would make tests more maintainable.


325-348: Consider using ExecutorService for managed thread lifecycle.

Raw Thread instances are started without a managed shutdown. If the test fails partway through, threads may be left running. Using ExecutorService with proper shutdown in a try-finally or @AfterEach ensures cleanup.

+    ExecutorService executor = Executors.newFixedThreadPool(numThreads);
+    try {
         for (int t = 0; t < numThreads; t++) {
             final int threadId = t;
-            new Thread(() -> {
+            executor.submit(() -> {
                 try {
                     startLatch.await();
                     for (int i = 0; i < tasksPerThread; i++) {
                         // ... task registration
                     }
                 } catch (Exception e) {
                     failureCount.incrementAndGet();
                 } finally {
                     completionLatch.countDown();
                 }
-            }).start();
+            });
         }
+    } finally {
+        executor.shutdown();
+    }
src/main/java/com/rcs/ssf/cache/CacheWarmupService.java (2)

80-82: Self-injection pattern is correct but consider adding null safety.

The self-injection for @Async proxy invocation is a valid Spring pattern. However, in edge cases (e.g., non-Spring test contexts), self could be null. Consider adding a null check in onApplicationReady().

     if (asyncWarmup) {
-        // Must call via self (the Spring proxy) to trigger @Async annotation processing
-        self.performWarmupAsync();
+        if (self != null) {
+            self.performWarmupAsync();
+        } else {
+            log.warn("Self-reference not available, falling back to synchronous warmup");
+            performWarmup();
+        }
     } else {

162-190: Executor shutdown is duplicated in catch blocks and finally.

executor.shutdownNow() is called in each catch block AND in the finally block. This is redundant since finally always executes. The catch blocks' shutdownNow() calls can be removed.

         } catch (TimeoutException e) {
             log.error("Cache warm-up timed out after {} seconds", warmupTimeoutSeconds);
             warmupFailures.increment();
-            executor.shutdownNow();
         } catch (InterruptedException e) {
             Thread.currentThread().interrupt();
             log.error("Cache warm-up interrupted");
             warmupFailures.increment();
-            executor.shutdownNow();
         } catch (Exception e) {
             log.error("Cache warm-up failed: {}", e.getMessage(), e);
             warmupFailures.increment();
-            executor.shutdownNow();
         } finally {
-            if (!executor.isShutdown()) {
-                executor.shutdownNow();
-            }
+            executor.shutdownNow();
             sample.stop(warmupTimer);
         }
src/main/java/com/rcs/ssf/batch/BatchMetricsCollector.java (1)

105-110: Consider reducing log level for no-op @PostConstruct.

Since this method is now a no-op retained only for backward compatibility, consider changing the log level from info to debug to reduce noise in production logs.

     @PostConstruct
     public void initializeMetrics() {
         // All metrics are now initialized in constructor
         // This method retained for backward compatibility but is now a no-op
-        log.info("BatchMetricsCollector initialized with throughput window of {}ms", THROUGHPUT_WINDOW_MS);
+        log.debug("BatchMetricsCollector initialized with throughput window of {}ms", THROUGHPUT_WINDOW_MS);
     }
src/main/java/com/rcs/ssf/cache/MultiLevelCacheService.java (3)

91-98: Metrics fields could be null if accessed before @PostConstruct.

The metric counters and timers are initialized in @PostConstruct but accessed in getOrCompute, get, and put. While Spring lifecycle ensures @PostConstruct runs before the bean is available to other beans, consider initializing metrics in the constructor (like BatchMetricsCollector does) for defensive safety.

Move metrics initialization to the constructor after L1 cache initialization:

     public MultiLevelCacheService(RedisTemplate<String, Object> redisTemplate,
                                    MeterRegistry meterRegistry) {
         this.redisTemplate = redisTemplate;
         this.meterRegistry = meterRegistry;

         // Initialize L1 caches
         this.queryResultL1 = buildL1Cache("query_results");
         // ... other caches ...
+        
+        // Initialize metrics eagerly to prevent NPEs
+        initializeMetrics();
     }

-    @PostConstruct
-    public void initializeMetrics() {
+    private void initializeMetrics() {

481-489: Consider using CACHE_NAMES for validation consistency.

The switch expression covers all valid cache names but is separate from CACHE_NAMES constant. Adding a new cache requires updating both. Consider validating against CACHE_NAMES first and using a Map for cache lookup to centralize the cache registry.


368-400: SCAN implementation is correct; batch memory concern is valid but depends on actual cache scale.

The implementation correctly uses non-blocking SCAN with cursor iteration and proper error handling. The memory concern about accumulating keys into a HashSet is theoretically valid but only problematic if a single cache category contains extremely large key sets (>100K+ entries).

Given that:

  • L1 cache is bounded at 500 entries
  • Redis has no explicit maxmemory policy configured (unbounded by default)
  • invalidateAll() is called per cache category separately
  • Typical cache configurations show entries in the 1000-5000 range

The current batch deletion after scanning is acceptable for production. The chunked deletion optimization would be beneficial only for very large cache categories (millions of keys) and can be added as a future enhancement if needed.
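
The chunked-deletion enhancement suggested above can be sketched with a plain iterator standing in for the Redis SCAN cursor and a callback standing in for a DEL batch; the helper and its names are hypothetical:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;

// Sketch of chunked deletion over a key cursor, so keys are deleted in
// bounded batches instead of being accumulated into one large set first.
public final class ChunkedDeleteSketch {

    /** Deletes keys in fixed-size chunks; returns the number of delete calls. */
    public static int deleteInChunks(Iterator<String> cursor, int chunkSize,
                                     Consumer<List<String>> deleteBatch) {
        int calls = 0;
        List<String> chunk = new ArrayList<>(chunkSize);
        while (cursor.hasNext()) {
            chunk.add(cursor.next());
            if (chunk.size() == chunkSize) {
                deleteBatch.accept(chunk);   // flush a full batch
                chunk = new ArrayList<>(chunkSize);
                calls++;
            }
        }
        if (!chunk.isEmpty()) {
            deleteBatch.accept(chunk);       // flush the final partial batch
            calls++;
        }
        return calls;
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < 2500; i++) keys.add("cache:query_results:" + i);
        int calls = deleteInChunks(keys.iterator(), 1000, batch -> {});
        System.out.println(calls); // 2500 keys in chunks of 1000 -> 3 calls
    }
}
```

This keeps peak memory proportional to the chunk size rather than to the total number of keys in the cache category.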

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2351272 and 2ee2b78.

📒 Files selected for processing (41)
  • .gitignore (1 hunks)
  • Dockerfile (2 hunks)
  • ROADMAP.md (1 hunks)
  • ROADMAP_PERFORMANCE.md (1 hunks)
  • build.gradle (1 hunks)
  • docker-compose.yml (2 hunks)
  • docker-start.sh (1 hunks)
  • docs/CERTIFICATE_ROTATION_POLICY.md (1 hunks)
  • docs/CERTIFICATE_SETUP.md (1 hunks)
  • logstash/ILM_CONFIGURATION.md (1 hunks)
  • logstash/certs/README.md (1 hunks)
  • logstash/certs/ca.key (0 hunks)
  • logstash/certs/ca.srl (0 hunks)
  • logstash/certs/client.key (0 hunks)
  • logstash/certs/client.pkcs8.key (0 hunks)
  • logstash/certs/generate-certs.sh (1 hunks)
  • logstash/certs/server.key (0 hunks)
  • logstash/certs/server.pkcs8.key (0 hunks)
  • logstash/config/pipelines.conf (1 hunks)
  • logstash/ilm-policies/logs-app-policy.json (1 hunks)
  • logstash/ilm-policies/logs-audit-policy.json (1 hunks)
  • logstash/ilm-policies/logs-nginx-policy.json (1 hunks)
  • logstash/index-templates/logs-app-template.json (1 hunks)
  • logstash/index-templates/logs-audit-template.json (1 hunks)
  • logstash/index-templates/logs-nginx-template.json (1 hunks)
  • logstash/setup-ilm.sh (1 hunks)
  • monitoring/elasticsearch/templates/logs-app-template.json (1 hunks)
  • src/main/java/com/rcs/ssf/batch/BatchMetricsCollector.java (1 hunks)
  • src/main/java/com/rcs/ssf/batch/BatchOperationsHandler.java (7 hunks)
  • src/main/java/com/rcs/ssf/cache/CacheWarmupService.java (1 hunks)
  • src/main/java/com/rcs/ssf/cache/MultiLevelCacheService.java (1 hunks)
  • src/main/java/com/rcs/ssf/config/CacheConfig.java (2 hunks)
  • src/main/java/com/rcs/ssf/tracing/LatencyAnalyzer.java (2 hunks)
  • src/main/resources/application.yml (1 hunks)
  • src/main/resources/logback-logstash-appender.xml (1 hunks)
  • src/main/resources/logback-spring.xml (3 hunks)
  • src/test/java/com/rcs/ssf/batch/BatchMetricsCollectorTest.java (1 hunks)
  • src/test/java/com/rcs/ssf/batch/BatchOperationsHandlerTest.java (2 hunks)
  • src/test/java/com/rcs/ssf/cache/CacheWarmupServiceTest.java (1 hunks)
  • src/test/java/com/rcs/ssf/cache/MultiLevelCacheServiceTest.java (1 hunks)
  • verify-certs.sh (1 hunks)
💤 Files with no reviewable changes (6)
  • logstash/certs/ca.srl
  • logstash/certs/client.pkcs8.key
  • logstash/certs/client.key
  • logstash/certs/server.key
  • logstash/certs/server.pkcs8.key
  • logstash/certs/ca.key
🧰 Additional context used
📓 Path-based instructions (5)
{Dockerfile,docker-compose.yml}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Container image builds use ./gradlew bootBuildImage --imageName=ssf-graphql:latest; keep Dockerfile, docker-compose.yml, and env var docs in sync

Files:

  • docker-compose.yml
  • Dockerfile
src/test/java/**/*.java

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use JUnit 5 + Mockito + spring-graphql-test in test files; prefer @GraphQlTest and WebGraphQlTester slices, spinning up Testcontainers only when Oracle behavior is required

Files:

  • src/test/java/com/rcs/ssf/cache/MultiLevelCacheServiceTest.java
  • src/test/java/com/rcs/ssf/batch/BatchMetricsCollectorTest.java
  • src/test/java/com/rcs/ssf/cache/CacheWarmupServiceTest.java
  • src/test/java/com/rcs/ssf/batch/BatchOperationsHandlerTest.java
{README.md,HELP.md,sql/README.md,src/test/java/**/*.java}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Any change to auth, persistence, or migrations mandates doc updates (README.md, HELP.md, sql/README.md) and both happy-path and failure-path tests

Files:

  • src/test/java/com/rcs/ssf/cache/MultiLevelCacheServiceTest.java
  • src/test/java/com/rcs/ssf/batch/BatchMetricsCollectorTest.java
  • src/test/java/com/rcs/ssf/cache/CacheWarmupServiceTest.java
  • src/test/java/com/rcs/ssf/batch/BatchOperationsHandlerTest.java
src/main/java/**/*CacheConfig.java

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Default cache stack uses Caffeine via CacheConfig with Redis secondary; reuse existing cache names and keep persisted query hashes stable for APQ stored in Redis

Files:

  • src/main/java/com/rcs/ssf/config/CacheConfig.java
{nginx.conf,Dockerfile,frontend/ngsw-config.json}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

TLS/port changes must be reflected across nginx.conf, Dockerfile, and frontend/ngsw-config.json so the PWA continues to load cached assets

Files:

  • Dockerfile
🧠 Learnings (8)
📓 Common learnings
Learnt from: CR
Repo: zlovtnik/spring-graphql PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T00:50:41.787Z
Learning: Spring Boot 3.5 GraphQL gateway serves Jetty TLS on 8443, fronts Oracle stored procs, MinIO, and Redis
📚 Learning: 2025-11-25T00:50:41.787Z
Learnt from: CR
Repo: zlovtnik/spring-graphql PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T00:50:41.787Z
Learning: Runtime expects Redis 7+, Oracle, MinIO via `docker-compose.yml` for local stack; HTTPS cert is `src/main/resources/keystore.p12`

Applied to files:

  • docker-compose.yml
  • docs/CERTIFICATE_SETUP.md
  • src/main/resources/logback-spring.xml
  • Dockerfile
  • docker-start.sh
📚 Learning: 2025-11-25T00:50:41.787Z
Learnt from: CR
Repo: zlovtnik/spring-graphql PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T00:50:41.787Z
Learning: Applies to {src/main/java/**/*EnvironmentValidator.java,src/main/java/**/*JwtProperties.java,README.md,HELP.md} : Startup must validate `JWT_SECRET`, `MINIO_ACCESS_KEY`, and `MINIO_SECRET_KEY` via `EnvironmentValidator` and `JwtProperties.validateSecretEntropy`; document any new env var in `README.md` + `HELP.md`

Applied to files:

  • docker-compose.yml
  • docs/CERTIFICATE_ROTATION_POLICY.md
  • docs/CERTIFICATE_SETUP.md
  • docker-start.sh
📚 Learning: 2025-11-25T00:50:41.787Z
Learnt from: CR
Repo: zlovtnik/spring-graphql PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T00:50:41.787Z
Learning: Applies to src/main/java/**/*CacheConfig.java : Default cache stack uses Caffeine via `CacheConfig` with Redis secondary; reuse existing cache names and keep persisted query hashes stable for APQ stored in Redis

Applied to files:

  • src/test/java/com/rcs/ssf/cache/MultiLevelCacheServiceTest.java
  • src/main/java/com/rcs/ssf/cache/CacheWarmupService.java
  • src/main/resources/application.yml
  • src/main/java/com/rcs/ssf/config/CacheConfig.java
  • src/main/java/com/rcs/ssf/cache/MultiLevelCacheService.java
📚 Learning: 2025-11-25T00:50:41.787Z
Learnt from: CR
Repo: zlovtnik/spring-graphql PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T00:50:41.787Z
Learning: Applies to src/main/java/**/*Resilience4jConfig.java : Circuit breakers/retries must be configured in `Resilience4jConfig` for `database`, `redis`, `minio`, `auth-service`, `audit-service` and export metrics through Micrometer

Applied to files:

  • src/main/java/com/rcs/ssf/config/CacheConfig.java
📚 Learning: 2025-11-25T00:50:41.787Z
Learnt from: CR
Repo: zlovtnik/spring-graphql PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T00:50:41.787Z
Learning: Applies to {README.md,HELP.md,sql/README.md,src/test/java/**/*.java} : Any change to auth, persistence, or migrations mandates doc updates (`README.md`, `HELP.md`, `sql/README.md`) and both happy-path and failure-path tests

Applied to files:

  • docs/CERTIFICATE_SETUP.md
📚 Learning: 2025-11-25T00:50:41.787Z
Learnt from: CR
Repo: zlovtnik/spring-graphql PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T00:50:41.787Z
Learning: Applies to {nginx.conf,Dockerfile,frontend/ngsw-config.json} : TLS/port changes must be reflected across `nginx.conf`, Dockerfile, and `frontend/ngsw-config.json` so the PWA continues to load cached assets

Applied to files:

  • docs/CERTIFICATE_SETUP.md
  • Dockerfile
📚 Learning: 2025-11-25T00:50:41.787Z
Learnt from: CR
Repo: zlovtnik/spring-graphql PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T00:50:41.787Z
Learning: Applies to {Dockerfile,docker-compose.yml} : Container image builds use `./gradlew bootBuildImage --imageName=ssf-graphql:latest`; keep Dockerfile, `docker-compose.yml`, and env var docs in sync

Applied to files:

  • Dockerfile
🧬 Code graph analysis (2)
src/main/java/com/rcs/ssf/cache/CacheWarmupService.java (1)
src/main/java/com/rcs/ssf/cache/MultiLevelCacheService.java (1)
  • Service (60-515)
logstash/certs/generate-certs.sh (1)
verify-certs.sh (1)
  • log_info (31-33)
🪛 markdownlint-cli2 (0.18.1)
logstash/ILM_CONFIGURATION.md

14-14: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


22-22: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


302-302: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


310-310: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


315-315: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


321-321: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)

docs/CERTIFICATE_SETUP.md

263-263: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


269-269: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)

🪛 Shellcheck (0.11.0)
verify-certs.sh

[warning] 10-10: SCRIPT_NAME appears unused. Verify use (or export if used externally).

(SC2034)


[warning] 65-65: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 86-86: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 111-111: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 112-112: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 119-119: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 120-120: Declare and assign separately to avoid masking return values.

(SC2155)
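For reference, the fix Shellcheck is asking for with SC2155 is purely mechanical; the difference is whether the command substitution's exit status survives the assignment:

```shell
#!/usr/bin/env bash
# SC2155: `local var=$(cmd)` masks cmd's exit status because `local`
# itself returns 0, so failures in the substitution go unnoticed.
masked() {
    local out=$(false)      # exit status of `false` is swallowed by `local`
    echo "status=$?"        # prints status=0
}
unmasked() {
    local out rc
    out=$(false) || rc=$?   # assignment alone preserves the failure
    echo "status=${rc:-0}"  # prints status=1
}
masked
unmasked
```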

🔇 Additional comments (44)
build.gradle (2)

22-32: Approve BOM-managed dependency strategy with pinned semconv.

The shift to explicit BOMs (OTEL 1.40.0, Instrumentation 2.6.0) with pinned semconv (1.37.0) is a sound dependency management approach. The inline comment correctly explains why semconv must be explicitly managed since it's independently released.


80-88: Clarify the rationale for downgrading OpenTelemetry from 1.49.0 to 1.40.0.

The change intentionally pins OTEL to 1.40.0 instead of using Spring Boot 3.5.7's managed 1.49.0, citing "stability and tested compatibility." However, the comment does not specify what stability or compatibility issues prompted this downgrade.

Please clarify:

  1. What specific stability or compatibility issues exist in OTEL 1.49.0? (e.g., data loss, memory leaks, gRPC failures, Jaeger integration bugs)
  2. Is version 1.40.0 already validated and in use elsewhere in your organization?
  3. Any known deprecations or breaking changes between 1.40.0 and 1.49.0 that could affect future upgrades?

If this is a known issue or a dependency conflict, consider adding a reference to an issue tracker or ADR in the comment for future maintainers.

src/main/java/com/rcs/ssf/tracing/LatencyAnalyzer.java (1)

211-214: Null checks on resolver and layer are a solid defensive addition

Failing fast here with Objects.requireNonNull avoids more obscure NPEs deeper in compute / withLayerLatency and makes the contract explicit.

monitoring/elasticsearch/templates/logs-app-template.json (1)

86-90: Verify MDC cardinality doesn't degrade search performance.

Enabling dynamic: true on the mdc object allows arbitrary MDC keys to be indexed without pre-definition. While this provides flexibility, it could introduce unbounded field cardinality if the application logs high-cardinality values (e.g., user IDs, transaction IDs, request parameters).

The comment is clear and helpful. Ensure your application's MDC usage is controlled and monitored.

logstash/ilm-policies/logs-nginx-policy.json (1)

1-39: ILM policy structure is sound.

The nginx ILM policy follows Elasticsearch best practices with appropriate rollover thresholds and retention periods for infrastructure logs.

logstash/ilm-policies/logs-audit-policy.json (1)

1-42: Audit retention and lifecycle tiers are appropriate for compliance.

The 7-year retention period and searchable snapshot strategy in warm phase align with typical compliance requirements for audit logs.

Ensure the found-snapshots repository is configured in your Elasticsearch cluster before enabling this policy.

logstash/ILM_CONFIGURATION.md (1)

1-50: Comprehensive ILM documentation is clear and well-structured.

The document effectively explains the per-pattern ILM approach, migration strategy, and operational procedures. The before/after pipeline examples and verification commands are particularly helpful for operators.

logstash/index-templates/logs-nginx-template.json (1)

1-51: Template correctly implements nginx-specific ILM setup.

The template properly references the logs-nginx-policy ILM policy and uses appropriate field types for infrastructure logs (e.g., client_ip as IP, bytes as long). Replica and shard configuration is reasonable for this log type.

logstash/ilm-policies/logs-app-policy.json (1)

1-42: App ILM policy structure reflects appropriate log access patterns.

The 50GB rollover threshold and 7-day warm phase transition accommodate typical application log volumes. The 90-day retention aligns with standard observability requirements.

logstash/index-templates/logs-audit-template.json (1)

1-61: Audit template appropriately emphasizes data redundancy and traceability.

The 2-replica configuration and comprehensive audit fields (audit_type, actor_id, previous_state, new_state) support compliance requirements. The decision to exclude jaeger_trace_url from indexing while keeping trace_id indexed is reasonable for query optimization.

logstash/index-templates/logs-app-template.json (1)

1-71: App template balances shard count, observability fields, and pipeline enrichment.

The 3-shard configuration supports high-volume app logs, and the field set properly captures trace correlation (trace_id, span_id) and user identification for observability queries.

logstash/setup-ilm.sh (1)

89-139: Main execution flow is well-structured.

The script follows a logical sequence: health check → policies → templates → bootstrap indices. The summary output with verification commands is helpful for operators.

logstash/config/pipelines.conf (1)

217-226: Verification of the ILM configuration and the suggested authentication change shows the original review comment is incorrect. Analysis:

ILM Configuration is Correct and Documented

The Logstash pipeline configuration at lines 217-226 is properly configured:

  • ilm_pattern => "000001" correctly matches the bootstrap index naming convention (logs-app-000001) as documented in logstash/ILM_CONFIGURATION.md
  • ilm_rollover_alias => "logs-app-write" correctly corresponds to the alias created in setup-ilm.sh
  • ilm_policy => "logs-app-policy" correctly references the policy defined in the ILM setup

Critical Issue: Authentication Suggestion is Incorrect

The original review suggests adding authentication credentials:

user => "${ES_USER}"
password => "${ES_PASSWORD}"

However, in docker-compose.yml (line 73-74), Elasticsearch is configured with:

xpack.security.enabled=false

This means Elasticsearch security is explicitly disabled. The setup scripts (setup-ilm.sh and setup-elasticsearch.sh) also use curl without any authentication. Adding credentials when security is disabled would cause connection failures.

Conclusion

The review comment is incorrect because:

  1. The ILM syntax is already correct and matches the documented configuration
  2. The suggested authentication addition would break the setup, not improve it
  3. The verification concern is unnecessary—the configuration is already validated by the project's own documentation

The ILM configuration syntax is correct and requires no changes. The ilm_pattern, ilm_rollover_alias, and ilm_policy values all properly match the bootstrap indices, aliases, and policies defined in the setup scripts and documented in logstash/ILM_CONFIGURATION.md. Do not add authentication—Elasticsearch security is disabled in this environment (xpack.security.enabled=false in docker-compose.yml).

Likely an incorrect or invalid review comment.

logstash/certs/README.md (1)

1-112: LGTM! Comprehensive TLS documentation.

The README provides clear guidance on certificate management, includes prominent security warnings about private keys, and offers practical troubleshooting steps. The documentation aligns well with the certificate generation and verification scripts introduced in this PR.

.gitignore (1)

48-56: LGTM! Comprehensive security patterns.

The ignore patterns correctly prevent committing sensitive TLS materials while preserving public keys (*.pub exclusion). The broad *.key and *.pem patterns provide defense-in-depth, which is appropriate for security-sensitive files.

verify-certs.sh (2)

126-128: Verify intentional failure on certificate expiring in <7 days.

Line 128 returns 1 (failure) for certificates expiring in less than 7 days, even though it's logged as a warning. This causes the check to fail and increments the ERROR counter (since the function returns non-zero).

Is this intentional? If so, consider logging it as an error instead of a warning. If warnings should not fail the script, remove the return 1.

Apply this diff if warnings should not fail:

         elif [ $days_until_expiry -lt 7 ]; then
             log_warning "Certificate expiring very soon ($days_until_expiry days): $filepath"
-            return 1
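
For context, the expiry arithmetic behind such a check is straightforward to reproduce. A hedged sketch (not the literal verify-certs.sh code; it assumes openssl's `-enddate` output format and uses a throwaway 30-day cert):

```shell
#!/usr/bin/env bash
# Sketch: compute days until a certificate expires. Paths and the
# 30-day validity here are illustrative only.
cert="/tmp/expiry-demo.pem"
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
    -out "$cert" -subj "/CN=demo" -days 30 2>/dev/null

# -enddate prints e.g. "notAfter=Mar  1 12:00:00 2026 GMT"
end_date=$(openssl x509 -enddate -noout -in "$cert" | cut -d= -f2)
end_epoch=$(date -d "$end_date" +%s 2>/dev/null \
    || date -j -f "%b %d %T %Y %Z" "$end_date" +%s)   # GNU or BSD date
now_epoch=$(date +%s)
days_left=$(( (end_epoch - now_epoch) / 86400 ))
echo "days_left=$days_left"
```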

1-251: Well-structured verification script with comprehensive checks.

The script provides thorough validation of TLS certificates and keystores with clear logging and error accumulation. The logic properly handles missing KEYSTORE_PASSWORD as a warning rather than a failure, allowing flexibility in different deployment scenarios.

logstash/certs/generate-certs.sh (2)

162-187: Verify hardcoded keystore password aligns with runtime security model.

The script uses a hardcoded password changeit for PKCS#12 keystores (lines 168, 176, 183). While the script header clearly states "LOCAL DEVELOPMENT ONLY," ensure this aligns with your security model:

  1. The PR description mentions "runtime-provided KEYSTORE_PASSWORD" for the application
  2. These generated keystores appear to be for local development certificate generation
  3. Production should use proper PKI infrastructure (as documented)

If the application's runtime KEYSTORE_PASSWORD is different from the generation-time password, document how keystores are re-encrypted or regenerated for production environments.

Consider adding a comment in the script:

 # Server keystore
+# NOTE: Generation-time password 'changeit' is for local development only.
+# Production keystores must be generated with secure passwords via proper PKI.
 openssl pkcs12 -export \

1-216: Well-implemented PKI generation script with appropriate security warnings.

The script correctly implements a complete local development PKI with:

  • Strict error handling (set -euo pipefail)
  • Proper file permissions (600 on private keys)
  • Certificate chain validation
  • Clear security warnings about development-only usage

The implementation aligns well with the README documentation and verification script.

ROADMAP_PERFORMANCE.md (1)

1-299: LGTM! Excellent progress tracking and implementation detail.

The roadmap updates provide clear visibility into what's been implemented, partially implemented, or pending. The inclusion of specific file references, licensing notes, and prerequisite dependencies makes this a valuable living document for project tracking.

src/main/resources/application.yml (1)

88-93: LGTM! Cache warm-up configuration is well-defined.

The configuration appropriately enables asynchronous cache warm-up with a reasonable 30-second timeout, preventing startup blocking while ensuring critical caches are populated.

ROADMAP.md (1)

21-21: LGTM! Clear roadmap extension.

The addition of the ETL & Data Pipeline roadmap reference appropriately extends the documentation structure to track new initiatives.

docker-compose.yml (1)

231-231: Volume mount path aligns with docker-start.sh.

The path /etc/nginx/certs is consistent with the certificate paths defined in docker-start.sh (lines 76-79). This ensures the pre-flight validation can locate the TLS assets correctly.

docker-start.sh (2)

82-98: Truststore creation without -nokeys flag may include unexpected content.

When creating a truststore (which should only contain trusted certificates, not keys), consider adding the -nokeys flag to ensure no private key material is accidentally included. However, since no -inkey is provided here, this should be safe.


61-79: Good fail-fast validation for required TLS configuration.

The pre-flight checks ensure the application won't start with missing or misconfigured TLS assets. The path constants are well-defined and consistent with the Docker Compose volume mount.
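The fail-fast pattern the comment describes can be sketched as a small helper. This is a hypothetical illustration of the shape, not the literal docker-start.sh code:

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight helper: refuse to start when a required
# variable is unset or empty, mirroring the KEYSTORE_PASSWORD check.
require_env() {
    local name="$1"
    if [ -z "${!name:-}" ]; then
        echo "ERROR: $name must be set (no default for security compliance)" >&2
        return 1
    fi
}

export KEYSTORE_PASSWORD="example-only"
require_env KEYSTORE_PASSWORD && echo "preflight ok"

unset KEYSTORE_PASSWORD
require_env KEYSTORE_PASSWORD || echo "refusing to start"
```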

src/test/java/com/rcs/ssf/cache/MultiLevelCacheServiceTest.java (1)

1-258: Good test coverage for multi-level cache behavior.

The test suite covers the essential paths: L1/L2 hits, misses, compute-on-miss, invalidation, statistics, listeners, graceful degradation on Redis failure, and input validation. This aligns with the coding guidelines for testing cache-related functionality.

src/test/java/com/rcs/ssf/batch/BatchOperationsHandlerTest.java (1)

20-26: Metrics infrastructure properly wired into test setup.

The test now initializes BatchMetricsCollector with a SimpleMeterRegistry and passes it to BatchOperationsHandler, which aligns with the new optional metrics dependency in the handler.

Dockerfile (3)

55-60: Build-time message won't be visible at runtime.

The echo statement on line 59 runs during image build, not container startup. Users won't see this message when running the container. The actual runtime validation happens in docker-start.sh, which is the correct approach.


83-85: The verify-certs.sh script exists in the repository root and is properly structured.

The file is located at the repository root as expected by the Dockerfile COPY command on line 84. It is a 251-line bash script with proper structure, including shebang, function definitions for logging and certificate checks, and error handling. The subsequent RUN chmod +x command on line 85 will correctly make it executable in the image.


53-60: TLS path consistency verified across configuration files.

Verification confirms the review comment is accurate:

  1. Keystores generated at runtime: docker-start.sh creates truststore.p12 and keystore.p12 in /etc/nginx/certs/ only when KEYSTORE_PASSWORD is provided.

  2. Certificates generated at build time: Dockerfile creates /etc/nginx/certs/ and generates certificate.pem and private-key.pem at build time.

  3. TLS paths are consistent:

    • nginx.conf uses: /etc/nginx/certs/certificate.pem and /etc/nginx/certs/private-key.pem
    • Dockerfile creates: /etc/nginx/certs/ directory with those files
    • docker-start.sh generates keystores in the same directory
  4. Port consistency: nginx.conf listens on ports 80 and 443, which align with docker-compose.yml port mappings (80:80, 443:443).

Note: frontend/ngsw-config.json is a service worker asset cache manifest and contains no port or TLS configuration to verify.

The separation of build-time certificate generation from runtime keystore generation is the correct security approach—no secrets are embedded in the image.

src/main/resources/logback-logstash-appender.xml (1)

31-34: Good observability MDC field coverage.

Including span_id alongside trace_id enables full distributed tracing correlation. Note that the JSON_FILE appender in logback-spring.xml doesn't include span_id — consider adding it there for consistency if local JSON logs are used for debugging.

docs/CERTIFICATE_ROTATION_POLICY.md (1)

1-6: Comprehensive certificate lifecycle documentation.

Excellent coverage of rotation procedures, incident response, and monitoring. The 90-day production rotation aligns with industry best practices. The troubleshooting table (lines 263-269) will be particularly useful for operators.

src/test/java/com/rcs/ssf/batch/BatchMetricsCollectorTest.java (1)

22-27: LGTM!

Clean test setup using SimpleMeterRegistry for isolation. Explicit initializeMetrics() call mirrors the @PostConstruct lifecycle in production.

docs/CERTIFICATE_SETUP.md (2)

80-122: Well-documented Kubernetes deployment pattern.

The secret creation and pod manifest examples correctly separate the password (as literal) from certificate files, with proper volume mount configuration.


1-10: Comprehensive SSL/TLS setup documentation.

Good coverage of deployment scenarios (Docker, Kubernetes, VM) with practical examples. The troubleshooting table aligns well with the rotation policy document.

src/main/resources/logback-spring.xml (1)

91-108: Clean profile-based Logstash integration.

Good separation: the docker profile includes Logstash with console output for debugging, while production omits console. The explicit ENVIRONMENT property override ensures correct environment tagging in logs.

src/main/java/com/rcs/ssf/batch/BatchOperationsHandler.java (2)

52-56: Good optional dependency pattern.

Using @Autowired(required = false) with constructor injection allows the handler to function without metrics infrastructure while enabling observability when available.


310-337: LGTM!

Clean helper methods with consistent null-checking pattern. The separation of success vs. partial success metrics provides good granularity for observability.

src/test/java/com/rcs/ssf/cache/CacheWarmupServiceTest.java (1)

1-27: LGTM! Comprehensive test coverage.

The test suite covers priority validation, execution ordering, metrics tracking, failure handling, concurrent registration, and async warmup via proxy. Good use of JUnit 5 + Mockito as per coding guidelines.

src/main/java/com/rcs/ssf/cache/CacheWarmupService.java (1)

55-57: Well-designed cache warmup service with phased execution.

The service implements a robust warmup strategy with priority-based phases, timeout enforcement, and comprehensive metrics. The thread-safe task registry using ConcurrentHashMap with CopyOnWriteArrayList is appropriate for concurrent registration during startup.

src/main/java/com/rcs/ssf/batch/BatchMetricsCollector.java (2)

216-244: LGTM! Correct double-checked locking for window expiry.

The throughput window update correctly uses double-checked locking with a ReentrantLock to prevent lost increments during concurrent window expiry. The re-read of windowStartTime inside the lock guards against the ABA problem as documented.


45-57: Well-structured metrics collector with eager initialization.

Eagerly initializing counters in the constructor prevents NPEs during early bean construction. The separation between global counters and per-operation timers/counters via ConcurrentHashMap is appropriate.

src/main/java/com/rcs/ssf/cache/MultiLevelCacheService.java (2)

305-311: Good defensive TTL adjustment to prevent stale L1 data.

The check ensures L2 TTL is never shorter than L1 TTL, preventing the scenario where L2 expires but L1 still serves stale data. The warning log helps identify misconfigured TTLs.


60-68: Well-designed multi-level cache with comprehensive observability.

The service correctly implements L1/L2 cache tiering with Caffeine and Redis, including read-through semantics, on-write invalidation, and graceful degradation when Redis is unavailable. The invalidation listener pattern provides good extensibility. Based on learnings, this aligns with the project's cache stack using Caffeine with Redis secondary.

Comment thread docker-compose.yml
Comment on lines +208 to +212
# KEYSTORE_PASSWORD: Required for SSL/TLS security. Must be set explicitly.
# Used for truststore.p12 and keystore.p12 password. NO default value provided for security compliance.
# Set to a strong, unique value (e.g., via secrets manager or environment setup).
# Example: KEYSTORE_PASSWORD=your-strong-unique-password-here
KEYSTORE_PASSWORD: ${KEYSTORE_PASSWORD}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

Script executed:

#!/bin/bash
# Verify KEYSTORE_PASSWORD documentation
echo "=== Checking README.md for KEYSTORE_PASSWORD ==="
if [ -f README.md ]; then
    rg -n "KEYSTORE_PASSWORD" README.md || echo "Not found in README.md"
else
    echo "README.md not found"
fi

echo ""
echo "=== Checking HELP.md for KEYSTORE_PASSWORD ==="
if [ -f HELP.md ]; then
    rg -n "KEYSTORE_PASSWORD" HELP.md || echo "Not found in HELP.md"
else
    echo "HELP.md not found"
fi

echo ""
echo "=== Checking docker-compose.yml for KEYSTORE_PASSWORD context ==="
rg -n -A2 -B2 "KEYSTORE_PASSWORD" docker-compose.yml

Repository: zlovtnik/spring-graphql


<shell_result>
Repository: zlovtnik/spring-graphql
Exit code: 0

stdout:

=== Checking README.md for KEYSTORE_PASSWORD ===
175:export KEYSTORE_PASSWORD=changeit

=== Checking HELP.md for KEYSTORE_PASSWORD ===
53:export KEYSTORE_PASSWORD=changeit

=== Checking docker-compose.yml for KEYSTORE_PASSWORD context ===
206-      MINIO_SECRET_KEY: ${MINIO_SECRET_KEY:-o-cara-esta-pedindo-uma-chave-secreta}
207-      JWT_SECRET: ${JWT_SECRET:-0123456789abcdefghijklmnopqrstuv0123456789abcdefghijklmnopqrstuv}
208:      # KEYSTORE_PASSWORD: Required for SSL/TLS security. Must be set explicitly.
209-      # Used for truststore.p12 and keystore.p12 password. NO default value provided for security compliance.
210-      # Set to a strong, unique value (e.g., via secrets manager or environment setup).
211:      # Example: KEYSTORE_PASSWORD=your-strong-unique-password-here
212:      KEYSTORE_PASSWORD: ${KEYSTORE_PASSWORD}
213-      REDIS_HOST: redis
214-      REDIS_PORT: 6379

</shell_result>


Documentation for KEYSTORE_PASSWORD exists but contradicts the new security requirement.

The environment variable is documented in README.md (line 175) and HELP.md (line 53), but both files still show export KEYSTORE_PASSWORD=changeit with a default value. This conflicts with the docker-compose.yml change (lines 208-212) which now requires KEYSTORE_PASSWORD to be set explicitly with no default for security compliance.

Per the coding guidelines, Dockerfile, docker-compose.yml, and env var documentation must stay in sync. README.md and HELP.md need to be updated to reflect that KEYSTORE_PASSWORD has no default value and must be explicitly provided.

🤖 Prompt for AI Agents
In docker-compose.yml around lines 208-212 and docs README.md (line 175) and
HELP.md (line 53), the docs still show a default KEYSTORE_PASSWORD=changeit
which conflicts with docker-compose.yml that requires KEYSTORE_PASSWORD be set
explicitly with no default; update README.md and HELP.md to remove the default
example, state that KEYSTORE_PASSWORD has no default and must be explicitly
provided (recommend using a secrets manager or environment setup), replace the
example line with something like "export
KEYSTORE_PASSWORD=<your-strong-unique-password>" or instructions for mounting a
secret, and ensure both files mirror the exact wording and examples so
documentation and docker-compose.yml remain in sync.

Comment thread docker-start.sh
Comment on lines +85 to +92
openssl pkcs12 -export \
-in "$CERT_PATH" \
-out "$TRUSTSTORE_PATH" \
-name "logstash-cert" \
-passout pass:"$KEYSTORE_PASSWORD" 2>/dev/null || {
echo "ERROR: Failed to create truststore.p12"
exit 1
}

⚠️ Potential issue | 🟡 Minor

Password exposure risk via command-line arguments.

Passing KEYSTORE_PASSWORD via pass:$KEYSTORE_PASSWORD can expose the password in process listings (e.g., ps aux). Consider using pass:stdin and piping the password, or using a temporary file with restricted permissions.

-        openssl pkcs12 -export \
-            -in "$CERT_PATH" \
-            -out "$TRUSTSTORE_PATH" \
-            -name "logstash-cert" \
-            -passout pass:"$KEYSTORE_PASSWORD" 2>/dev/null || {
+        echo "$KEYSTORE_PASSWORD" | openssl pkcs12 -export \
+            -in "$CERT_PATH" \
+            -out "$TRUSTSTORE_PATH" \
+            -name "logstash-cert" \
+            -passout stdin 2>/dev/null || {

Also applies to: 105-113

🤖 Prompt for AI Agents
In docker-start.sh around lines 85-92 (and also apply the same change to lines
105-113), the script currently passes KEYSTORE_PASSWORD on the openssl command
line which can expose it via process listings; change the invocation to supply
the password via stdin or a temporary file with strict permissions instead of
using pass:$KEYSTORE_PASSWORD. Specifically, pipe the password into openssl
using pass:stdin (e.g., printf or echo to the command's stdin) or write the
password to a temp file created with mktemp and chmod 600, reference that file
for the pass argument, and ensure you securely delete the temp file after use;
update both occurrences and add brief error handling for the piping/temp-file
flow.

Comment on lines +88 to +105
openssl pkcs12 -export \
-in /etc/nginx/certs/certificate-new.pem \
-out /etc/nginx/certs/truststore-new.p12 \
-name "logstash-cert" \
-passout pass:"${KEYSTORE_PASSWORD}"

# Create new keystore
openssl pkcs12 -export \
-in /etc/nginx/certs/certificate-new.pem \
-inkey /etc/nginx/certs/private-key-new.pem \
-out /etc/nginx/certs/keystore-new.p12 \
-name "logstash-key" \
-passout pass:"${KEYSTORE_PASSWORD}"

# Verify keystores
openssl pkcs12 -info -in /etc/nginx/certs/truststore-new.p12 -passin pass:"${KEYSTORE_PASSWORD}" -noout
openssl pkcs12 -info -in /etc/nginx/certs/keystore-new.p12 -passin pass:"${KEYSTORE_PASSWORD}" -noout
```

⚠️ Potential issue | 🟡 Minor

Security: Avoid exposing passwords in process arguments.

Using -passout pass:"${KEYSTORE_PASSWORD}" exposes the password in the process list (visible via ps). Consider using file-based or stdin-based password input:

-  -passout pass:"${KEYSTORE_PASSWORD}"
+  -passout file:<(echo "$KEYSTORE_PASSWORD")

Or reference the environment variable directly using OpenSSL's standard pass-phrase syntax (the variable must be exported in the environment):

-passout env:KEYSTORE_PASSWORD

This applies to lines 92, 100, 103, 104, 194, and 252-253 throughout the document.

🤖 Prompt for AI Agents
In docs/CERTIFICATE_ROTATION_POLICY.md around lines 88 to 105 (and also update
occurrences at lines ~92, 100, 103, 104, 194, 252-253), the pkcs12 commands pass
passwords on the command line using -passout pass:"${KEYSTORE_PASSWORD}" which
exposes secrets in the process list; change these to a safe method such as using
-passout env:KEYSTORE_PASSWORD (requires the variable set in the environment) or
-passout file:/path/to/password-file or using stdin/fd methods so the password
is not visible in process arguments, and update all listed occurrences
consistently and document the required environment/file setup.

Comment thread docs/CERTIFICATE_SETUP.md
Comment on lines +62 to +64
```bash
docker-compose up -e KEYSTORE_PASSWORD="MySecurePassword123!@#"
```

⚠️ Potential issue | 🟡 Minor

Incorrect docker-compose syntax.

docker-compose up -e is not a valid flag. Set the environment variable in the shell before the command, or provide it via a .env file:

-docker-compose up -e KEYSTORE_PASSWORD="MySecurePassword123!@#"
+KEYSTORE_PASSWORD="MySecurePassword123!@#" docker-compose up

Or use the .env file approach shown later in the document.

🤖 Prompt for AI Agents
In docs/CERTIFICATE_SETUP.md around lines 62–64, the example uses an invalid
docker-compose flag (`docker-compose up -e ...`). Replace that line with
instructions to set the KEYSTORE_PASSWORD environment variable before running
docker-compose, or reference using a .env file or docker-compose's --env-file/
environment settings in the compose YAML; update the text to show the env
variable is provided via the shell (export or prepended) or via an env file
rather than using `-e` with `docker-compose up`.

Comment on lines +252 to 259
# Fallback for any unmapped log types
else {
elasticsearch {
hosts => ["elasticsearch:9200"]
index => "%{[@metadata][target_index]}"
ilm_enabled => false
}
}

⚠️ Potential issue | 🟡 Minor

Set a default index for the fallback output.

The fallback output references [@metadata][target_index], which is only set in type-specific filter blocks (lines 159, 174, 199). If a log arrives with an unmapped type, this metadata field will be unset, potentially causing the index interpolation to fail.

Consider pointing the fallback at a literal default index. Note that Logstash's sprintf syntax has no inline-default form for event fields, so something like %{[@metadata][target_index]:-default} would be emitted literally rather than substituted:

   else {
     elasticsearch {
       hosts => ["elasticsearch:9200"]
-      index => "%{[@metadata][target_index]}"
+      index => "logs-unmapped-%{+YYYY.MM.dd}"
       ilm_enabled => false
     }
   }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# Fallback for any unmapped log types
else {
elasticsearch {
hosts => ["elasticsearch:9200"]
index => "%{[@metadata][target_index]}"
ilm_enabled => false
}
}
# Fallback for any unmapped log types
else {
elasticsearch {
hosts => ["elasticsearch:9200"]
index => "logs-unmapped-%{+YYYY.MM.dd}"
ilm_enabled => false
}
}
🤖 Prompt for AI Agents
In logstash/config/pipelines.conf around lines 252 to 259 the fallback
elasticsearch output uses "%{[@metadata][target_index]}", but that metadata is
only set in type-specific filters (lines ~159/174/199), so unmapped types can
produce an unset interpolation; update the fallback to use a safe default index
pattern (for example something like "logs-%{+YYYY.MM.dd}" or a namespaced
default) by changing the index setting to fall back when @metadata.target_index
is missing (use Logstash's sprintf with a literal default or set
@metadata.target_index earlier for unmatched types) ensuring the output always
has a valid index value.
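
The "set @metadata.target_index earlier for unmatched types" option from the prompt could be sketched as a catch-all filter like the one below. The field name mirrors the pipeline; the logs-unmapped-* index name and the filter's placement (after the type-specific blocks) are assumptions about the config's layout:

```conf
filter {
  # Runs after the type-specific filters; fills in a default only when
  # no earlier block set [@metadata][target_index].
  if ![@metadata][target_index] {
    mutate {
      add_field => { "[@metadata][target_index]" => "logs-unmapped-%{+YYYY.MM.dd}" }
    }
  }
}
```

With this in place the fallback output's index => "%{[@metadata][target_index]}" always interpolates to a valid index, and unmapped events land in a dated catch-all index instead of failing.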

Comment on lines +224 to +232
try {
executeTask(task);
loadedCount++;
warmupEntries.increment();
} catch (Exception e) {
log.warn("Warm-up task failed: cache={}, key={}, error={}",
cacheName, task.key(), e.getMessage());
warmupFailures.increment();
}

⚠️ Potential issue | 🟡 Minor

Metrics double-counting for null values.

When a task returns null, executeTask increments warmupNulls (line 258), but the caller executeWarmupPhase still increments warmupEntries (line 227) and loadedCount (line 226). This double-counts null results as both "entries loaded" and "nulls skipped".

                     try {
-                        executeTask(task);
-                        loadedCount++;
-                        warmupEntries.increment();
+                        if (executeTask(task)) {
+                            loadedCount++;
+                            warmupEntries.increment();
+                        }
                     } catch (Exception e) {

Then modify executeTask to return a boolean indicating success:

-    private <T> void executeTask(WarmupTask<T> task) {
+    private <T> boolean executeTask(WarmupTask<T> task) {
         // ...
         T value = task.valueSupplier().get();
         if (value != null) {
             cacheService.put(task.cacheName(), task.key(), value, task.ttl());
+            return true;
         } else {
             log.debug("Cache warm-up skipped due to null value: ...");
             warmupNulls.increment();
+            return false;
         }
     }

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In src/main/java/com/rcs/ssf/cache/CacheWarmupService.java around lines 224–232,
change the flow so null results aren't double-counted: modify executeTask(...)
to return a boolean success flag (true when an entry was actually loaded, false
when the task returned null or skipped), keep the warmupNulls increment inside
executeTask when it detects a null, and in executeWarmupPhase only increment
loadedCount and warmupEntries when executeTask returns true; preserve the
existing exception handling (catch block still increments warmupFailures) and
adjust call sites to use the boolean return value.

Comment on lines 106 to 129
@Bean
@Primary
public CacheManager caffeineCacheManager() {
CaffeineCacheManager cacheManager = new CaffeineCacheManager();

// Since CaffeineCacheManager applies uniform TTL across all caches via Caffeine.newBuilder(),
// we create a multi-cache approach by registering cache names individually.
// However, CaffeineCacheManager.setCaffeine() sets a shared Caffeine spec for all caches.
// To apply per-cache TTLs in Spring Cache context, we use a default configuration that
// matches the shortest L1 TTL and allow MultiLevelCacheService to enforce proper TTLs
// when direct cache coordination is needed.

// Default L1 TTL: set to the minimum to be conservative (10 min for query_results)
cacheManager.setCaffeine(Caffeine.newBuilder()
.maximumSize(500)
.expireAfterWrite(Duration.ofMinutes(10))
.maximumSize(L1_MAX_SIZE)
.expireAfterWrite(QUERY_RESULTS_TTL) // Use shortest L1 TTL as default
.recordStats());

// Register all cache names for eager initialization
cacheManager.setCacheNames(java.util.List.of(
QUERY_RESULTS, SESSIONS, USERS, METADATA, PERSISTED_QUERIES));

return cacheManager;
}

⚠️ Potential issue | 🔴 Critical

Critical: L1 cache applies uniform 10-minute TTL to all caches despite per-cache TTL documentation.

The CaffeineCacheManager applies a single shared Caffeine spec to all caches. Line 121 sets expireAfterWrite(QUERY_RESULTS_TTL) (10 minutes) as the TTL for all registered caches, but the documentation at lines 89-95 and the class javadoc claim per-cache TTLs (sessions: 60 min, persistedQueries: 60 min, metadata: 30 min, etc.).

This causes:

  • Session cache: Expires after 10 minutes instead of the intended 60 minutes aligned with token lifetime (line 76), forcing unnecessary re-authentication
  • Persisted queries (APQ): Expires after 10 minutes instead of 60 minutes, violating the guideline to keep APQ hashes stable for Redis storage
  • Other caches: Expire faster than documented, defeating the purpose of differentiated TTLs

CaffeineCacheManager does not support per-cache TTL configuration natively. You must either:

  1. Use separate CaffeineCacheManager instances (one per TTL group), or
  2. Build individual CaffeineCache instances with custom Caffeine builders and register them manually, or
  3. Rely entirely on MultiLevelCacheService for per-cache TTL enforcement and update documentation to reflect that L1 uses a uniform conservative TTL

Based on coding guidelines emphasizing APQ hash stability and session token alignment.

🤖 Prompt for AI Agents
In src/main/java/com/rcs/ssf/config/CacheConfig.java around lines 106 to 129,
the current CaffeineCacheManager sets a single shared Caffeine spec
(expireAfterWrite(QUERY_RESULTS_TTL)) which applies a uniform 10-minute TTL to
all caches; replace this with explicit per-cache caches: create per-cache
Caffeine builders with the correct expireAfterWrite values (sessions=60min,
persistedQueries=60min, metadata=30min, query_results=10min, etc.), wrap each
builder in a CaffeineCache instance named for its cache, and register those
caches in a SimpleCacheManager (or multiple CaffeineCacheManager instances
grouped by TTL) returned as the @Primary CacheManager; remove the shared
setCaffeine() and setCacheNames() usage and ensure the bean exposes the same
name/type so other components keep working.
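
Option 2 could look roughly like the sketch below. This is not a drop-in replacement: the TTL values follow the documented per-cache targets, but constant names like L1_MAX_SIZE and the exact Duration for each cache are assumptions from the surrounding config.

```java
// Sketch: per-cache TTLs via individually built CaffeineCache instances
// registered in a SimpleCacheManager (Spring initializes it via afterPropertiesSet).
@Bean
@Primary
public CacheManager caffeineCacheManager() {
    SimpleCacheManager manager = new SimpleCacheManager();
    manager.setCaches(List.of(
            buildCache(QUERY_RESULTS, Duration.ofMinutes(10)),
            buildCache(SESSIONS, Duration.ofMinutes(60)),          // align with token lifetime
            buildCache(USERS, Duration.ofMinutes(30)),
            buildCache(METADATA, Duration.ofMinutes(30)),
            buildCache(PERSISTED_QUERIES, Duration.ofMinutes(60)))); // keep APQ hashes stable
    return manager;
}

private CaffeineCache buildCache(String name, Duration ttl) {
    return new CaffeineCache(name, Caffeine.newBuilder()
            .maximumSize(L1_MAX_SIZE)
            .expireAfterWrite(ttl)
            .recordStats()
            .build());
}
```

Because the bean still exposes the CacheManager type under the same name and remains @Primary, @Cacheable consumers keep working unchanged; only the TTL behavior differs per cache.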

Comment thread src/main/java/com/rcs/ssf/config/CacheConfig.java
Comment on lines +186 to +190
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.activateDefaultTyping(
objectMapper.getPolymorphicTypeValidator(),
ObjectMapper.DefaultTyping.NON_FINAL,
JsonTypeInfo.As.PROPERTY);

⚠️ Potential issue | 🟠 Major

Major security risk: activateDefaultTyping enables polymorphic deserialization vulnerabilities.

Jackson's activateDefaultTyping with DefaultTyping.NON_FINAL is a well-known deserialization security risk. It allows attackers to specify arbitrary classes in JSON payloads, potentially leading to remote code execution through gadget chains.

Remove activateDefaultTyping and use explicit type handling via @JsonTypeInfo annotations on your domain classes, or use allowlist-based PolymorphicTypeValidator if polymorphism is absolutely required.

Apply this diff to remove the unsafe type configuration:

     @Bean
     public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory connectionFactory) {
         ObjectMapper objectMapper = new ObjectMapper();
-        objectMapper.activateDefaultTyping(
-                objectMapper.getPolymorphicTypeValidator(),
-                ObjectMapper.DefaultTyping.NON_FINAL,
-                JsonTypeInfo.As.PROPERTY);
+        objectMapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
         
         RedisTemplate<String, Object> template = new RedisTemplate<>();
         template.setConnectionFactory(connectionFactory);
📝 Committable suggestion


Suggested change
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.activateDefaultTyping(
objectMapper.getPolymorphicTypeValidator(),
ObjectMapper.DefaultTyping.NON_FINAL,
JsonTypeInfo.As.PROPERTY);
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
🤖 Prompt for AI Agents
In src/main/java/com/rcs/ssf/config/CacheConfig.java around lines 186-190,
remove the call to objectMapper.activateDefaultTyping(...) because
DefaultTyping.NON_FINAL enables unsafe polymorphic deserialization; instead
construct the ObjectMapper without activating default typing, register any
needed modules (e.g., JavaTimeModule), and handle polymorphism explicitly —
either annotate domain classes with @JsonTypeInfo/@JsonSubTypes or, if
polymorphism is absolutely required at runtime, replace the call with a safe
PolymorphicTypeValidator that allowlists specific base packages/classes (e.g.,
use BasicPolymorphicTypeValidator.builder().allowIfSubType(...).build()) and
pass that validator to activateDefaultTyping; ensure no open DefaultTyping usage
remains.
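
If polymorphic typing truly is required for Redis values, the allowlist-based validator mentioned above might look like this. The com.rcs.ssf package prefix is an assumption about where the cacheable domain types live; adjust the allowlist to the actual packages:

```java
// Sketch: restrict polymorphic deserialization to an explicit allowlist
// instead of the wide-open NON_FINAL default typing.
PolymorphicTypeValidator ptv = BasicPolymorphicTypeValidator.builder()
        .allowIfSubType("com.rcs.ssf.")   // assumed package for cached domain types
        .allowIfSubType("java.util.")     // common collection types used as cache values
        .build();

ObjectMapper objectMapper = new ObjectMapper();
objectMapper.activateDefaultTyping(
        ptv,
        ObjectMapper.DefaultTyping.NON_FINAL,
        JsonTypeInfo.As.PROPERTY);
```

Anything outside the allowlist then fails deserialization instead of instantiating attacker-chosen classes, which closes the gadget-chain vector while keeping the type information Redis round-trips need.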

Comment on lines +15 to +16
<!-- Environment for structured logging (overridden per profile) -->
<springProperty scope="context" name="ENVIRONMENT" source="spring.profiles.active" defaultValue="unknown"/>

⚠️ Potential issue | 🟡 Minor

Inconsistent ENVIRONMENT default values.

Line 16 sets defaultValue="unknown", but line 23 uses ${ENVIRONMENT:-dev}. This inconsistency means if spring.profiles.active is unset, the encoder's customFields will use "dev" while other references might expect "unknown":

-            <customFields>{"service":"${APPLICATION_NAME}","environment":"${ENVIRONMENT:-dev}"}</customFields>
+            <customFields>{"service":"${APPLICATION_NAME}","environment":"${ENVIRONMENT:-unknown}"}</customFields>

Also applies to: 23-23

🤖 Prompt for AI Agents
In src/main/resources/logback-spring.xml around lines 15-16 and line 23, the
springProperty defaultValue is "unknown" while elsewhere the template uses
${ENVIRONMENT:-dev}, causing inconsistent defaults; make them consistent by
changing the springProperty defaultValue to "dev" (or alternatively change the
${ENVIRONMENT:-dev} usage to ${ENVIRONMENT:-unknown}) so all references use the
same fallback value, and ensure any comments/documentation reflect the chosen
default.
