Signed-off-by: Moritz Johner <beller.moritz@googlemail.com>
Signal intelligence layer for AI-driven incident investigation:

- Signal anchors linking metrics → roles → workloads
- Dashboard quality scoring and role classification
- Baseline & anomaly detection with hybrid collection
- 8 MCP tools: Orient → Narrow → Investigate → Hypothesize → Verify

Co-Authored-By: Claude (claude-opus-4-5) <noreply@anthropic.com>
61 requirements across 8 categories:

- Signal Schema (8)
- Role Classification (6)
- Dashboard Quality (5)
- Ingestion Pipeline (6)
- Baseline Storage (6)
- Anomaly Detection (6)
- Observatory API (8)
- MCP Tools (16)

Co-Authored-By: Claude (claude-opus-4-5) <noreply@anthropic.com>
Phases:

24. Data Model & Ingestion: signal anchors, role classification, quality scoring, pipeline (25 requirements)
25. Baseline & Anomaly Detection: rolling stats, hybrid collection, anomaly scoring (12 requirements)
26. Observatory API & MCP Tools: 8 progressive disclosure tools (24 requirements)

All 61 milestone requirements mapped to phases.

Co-Authored-By: Claude (claude-opus-4-5) <noreply@anthropic.com>
Phase 24: Data Model & Ingestion

- Implementation decisions documented
- Phase boundary established

Co-Authored-By: Claude (claude-opus-4-5) <noreply@anthropic.com>
Phase 24: Data Model & Ingestion

- Standard stack identified
- Architecture patterns documented
- Pitfalls catalogued

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Phase 24: Data Model & Ingestion

- 4 plan(s) in 4 wave(s)
- 3 parallel (Wave 1), 2 parallel (Wave 2), 2 sequential (Waves 3-4)
- Ready for execution

Plans:
- 24-01: SignalAnchor types, layered classifier (5 layers), quality scorer (5 factors)
- 24-02: Signal extractor with multi-role support, K8s workload linker
- 24-03: GraphBuilder BuildSignalGraph with MERGE upsert, DashboardSyncer integration
- 24-04: Integration tests and human verification checkpoint

Wave structure:
- Wave 1: Foundation (types, classifier, quality scorer) - parallel
- Wave 2: Extraction & linkage (signal extractor, workload linker) - parallel
- Wave 3: GraphBuilder signal methods - sequential (depends on Wave 2)
- Wave 4: DashboardSyncer integration - sequential (depends on Wave 3)
- Wave 5: Verification checkpoint - blocking (depends on Wave 4)
- SignalRole enum with 7 roles plus Unknown (Availability, Latency, Errors, Traffic, Saturation, Churn, Novelty)
- SignalAnchor struct with role, confidence, quality, workload fields
- ClassificationResult for layered classification output
- WorkloadInference for K8s workload linkage from labels
- Composite key design: metric_name + workload_namespace + workload_name
- TTL via expires_at timestamp (7 days, follows v1.4 pattern)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- 5-layer classification with decreasing confidence (0.95 → 0.85-0.9 → 0.7-0.8 → 0.5 → 0)
- Layer 1: Hardcoded known metrics (20+ core Prometheus metrics)
- Layer 2: PromQL structure patterns (histogram_quantile, rate/increase)
- Layer 3: Metric name patterns (_latency, _error, _total, _usage)
- Layer 4: Panel title patterns (Error Rate, Latency, QPS, CPU)
- Layer 5: Unknown classification with confidence 0
- Comprehensive test coverage for all layers and priority handling
- Fixed duplicate keys in known metrics map (Rule 1 - bug fix)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
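A minimal Go sketch of the layered fallthrough described above. The type and function names, the pattern tables (only a few entries shown), and the exact per-layer confidences are illustrative assumptions, not the project's actual implementation:

```go
package main

import "strings"

// ClassificationResult holds the inferred role plus the confidence and layer
// that produced it (illustrative struct, not the project's actual type).
type ClassificationResult struct {
	Role       string
	Confidence float64
	Layer      int
}

// classifySignal tries each layer in priority order and returns the first hit,
// so higher-confidence layers always win over weaker heuristics.
func classifySignal(metricName, expr, panelTitle string) ClassificationResult {
	// Layer 1: hardcoded well-known metrics (highest confidence).
	known := map[string]string{
		"up":                        "Availability",
		"http_requests_total":       "Traffic",
		"process_cpu_seconds_total": "Saturation",
	}
	if role, ok := known[metricName]; ok {
		return ClassificationResult{Role: role, Confidence: 0.95, Layer: 1}
	}
	// Layer 2: PromQL structure (histogram_quantile implies latency).
	if strings.Contains(expr, "histogram_quantile") {
		return ClassificationResult{Role: "Latency", Confidence: 0.9, Layer: 2}
	}
	// Layer 3: metric name suffix patterns.
	switch {
	case strings.Contains(metricName, "_latency"):
		return ClassificationResult{Role: "Latency", Confidence: 0.75, Layer: 3}
	case strings.Contains(metricName, "_error"):
		return ClassificationResult{Role: "Errors", Confidence: 0.75, Layer: 3}
	}
	// Layer 4: panel title keywords (weakest textual signal).
	if strings.Contains(strings.ToLower(panelTitle), "error rate") {
		return ClassificationResult{Role: "Errors", Confidence: 0.5, Layer: 4}
	}
	// Layer 5: unknown fallback with zero confidence.
	return ClassificationResult{Role: "Unknown", Confidence: 0, Layer: 5}
}
```

The first-match structure also makes the priority-handling tests straightforward: a metric that matches both Layer 1 and Layer 3 must report Layer 1's confidence.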
- 5-factor quality computation: freshness, recent usage, alerts, ownership, completeness
- Freshness: linear decay from 90 days (1.0) to 365 days (0.0)
- RecentUsage: binary check from Grafana Stats API (graceful fallback)
- HasAlerts: binary check with 0.2 boost to incentivize alerting
- Ownership: team folder (1.0) vs General (0.5)
- Completeness: description + meaningful panel titles (>50% threshold)
- Formula: base = avg(4 factors), quality = min(1.0, base + alertBoost)
- Quality tiers: high (>=0.7), medium (>=0.4), low (<0.4)
- Comprehensive test coverage for all factors and formula

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
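The formula and tiers above can be sketched directly; function names are hypothetical and the four factor inputs are assumed to be precomputed values in [0, 1]:

```go
package main

// computeQualityScore mirrors the documented formula: the base is the average
// of four factors, a firing/configured alert adds a 0.2 boost, and the result
// is capped at 1.0.
func computeQualityScore(freshness, recentUsage, ownership, completeness float64, hasAlerts bool) float64 {
	base := (freshness + recentUsage + ownership + completeness) / 4
	quality := base
	if hasAlerts {
		quality += 0.2 // boost to incentivize alerting
	}
	if quality > 1.0 {
		quality = 1.0
	}
	return quality
}

// qualityTier maps a score to the documented tiers.
func qualityTier(q float64) string {
	switch {
	case q >= 0.7:
		return "high"
	case q >= 0.4:
		return "medium"
	default:
		return "low"
	}
}
```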
- ExtractSignalsFromPanel transforms panel queries into SignalAnchors
- Classifies each metric using 5-layer classifier from 24-01
- Filters out low-confidence (< 0.5) classifications
- Integrates workload linker for K8s resource inference
- Inherits quality score from source dashboard
- Generates unique QueryID for graph linking
- ExtractSignalsFromDashboard with deduplication
- Composite key: metric_name + namespace + workload_name
- Highest quality signal wins on duplicates
- Updates LastSeen timestamp on duplicates

Test coverage:
- Single-query and multi-query panels
- Quality score inheritance
- Workload inference integration
- Low-confidence filtering
- Empty query handling
- Dashboard-level deduplication
- Multiple metrics across multiple panels
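The "highest quality signal wins" deduplication can be sketched as follows (minimal stand-in struct and a hypothetical helper name; the real SignalAnchor has more fields):

```go
package main

// SignalAnchor carries only the fields needed for dedup in this sketch.
type SignalAnchor struct {
	MetricName, Namespace, WorkloadName string
	QualityScore                        float64
}

// dedupeSignals keeps one anchor per composite key
// (metric_name + namespace + workload_name); the highest quality wins.
func dedupeSignals(signals []SignalAnchor) []SignalAnchor {
	best := map[string]SignalAnchor{}
	order := []string{} // preserve first-seen order for deterministic output
	for _, s := range signals {
		key := s.MetricName + "|" + s.Namespace + "|" + s.WorkloadName
		existing, ok := best[key]
		if !ok {
			order = append(order, key)
			best[key] = s
			continue
		}
		if s.QualityScore > existing.QualityScore {
			best[key] = s // duplicate with higher quality replaces the old one
		}
	}
	out := make([]SignalAnchor, 0, len(order))
	for _, k := range order {
		out = append(out, best[k])
	}
	return out
}
```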
- InferWorkloadFromLabels infers workload from PromQL label selectors
- Label priority: deployment > app.kubernetes.io/name > app > service > job > pod
- Namespace-only inference for signals with namespace but no workload
- Returns nil for completely unlinked signals (no labels)
- Tracks InferredFrom field for debugging
- Confidence: 0.9 with namespace, varies by label type (0.6-0.9)

Test coverage:
- Label priority order verification
- Namespace inference with/without workload
- Empty labels handling
- Multiple labels (highest priority wins)
- Standard K8s recommended labels
- InferredFrom tracking
- Empty workload name handling
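The label-priority walk is the core of the linker and can be sketched in a few lines (hypothetical signature; the real function returns a richer WorkloadInference with per-label confidence):

```go
package main

// inferWorkloadFromLabels picks the workload name from PromQL label selectors
// using the documented priority order. The first matching non-empty label
// wins, and the winning key is reported for InferredFrom-style debugging.
func inferWorkloadFromLabels(labels map[string]string) (workload, inferredFrom string, ok bool) {
	priority := []string{"deployment", "app.kubernetes.io/name", "app", "service", "job", "pod"}
	for _, key := range priority {
		if v, exists := labels[key]; exists && v != "" {
			return v, key, true
		}
	}
	return "", "", false // completely unlinked signal (no usable labels)
}
```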
Tasks completed: 2/2
- feat(24-02): implement signal extractor with multi-role support (1babed5)
- feat(24-02): implement K8s workload linker with label priority (48eee9c)

Key accomplishments:
- Panel-to-SignalAnchor transformation with 5-layer classification
- K8s workload inference from PromQL labels with priority order
- Dashboard-level deduplication by composite key
- 24 test cases passing (13 extractor + 11 linker)

Duration: 4 minutes
SUMMARY: .planning/phases/24-data-model-ingestion/24-02-SUMMARY.md
Extends GraphBuilder with BuildSignalGraph method for creating/updating SignalAnchor nodes in FalkorDB graph.

Key features:
- MERGE upsert with composite key: metric_name + workload_namespace + workload_name + integration
- ON CREATE: Sets all fields including first_seen
- ON MATCH: Updates role, confidence, quality_score, last_seen, expires_at (preserves first_seen)
- Creates relationships: SignalAnchor->Dashboard (SOURCED_FROM), SignalAnchor->Metric (REPRESENTS), SignalAnchor->ResourceIdentity (MONITORS)
- 7-day TTL via expires_at timestamp
- Graceful error handling for relationship creation

Tests added:
- Single signal creation
- MERGE idempotency (same composite key updates fields)
- Multiple signals in batch
- Namespace-only signals (no workload)
- Empty signals array

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Modified syncDashboard to call signal extraction after dashboard sync completes.

Key features:
- ingestSignals helper computes quality score and extracts signals
- Calls BuildSignalGraph to persist signals to graph
- Graceful error handling: signal failures logged but don't fail dashboard sync
- Stub methods for getAlertRuleCount, getViewsLast30Days (return 0 for now)
- Signal count logged in sync completion messages

Signal ingestion piggybacks on existing hourly dashboard sync, inheriting incremental sync pattern.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Tasks completed: 2/2
- BuildSignalGraph with MERGE upsert and relationships
- Signal extraction hook in DashboardSyncer

SUMMARY: .planning/phases/24-data-model-ingestion/24-03-SUMMARY.md
- TestSignalIngestionEndToEnd: 8 subtests covering full pipeline
  - Known metrics Layer 1 classification (0.95 confidence)
  - PromQL structure Layer 2 classification (0.9 confidence)
  - Quality score propagation from dashboard to signals
  - TTL expiration (7 days) via expires_at timestamp
  - Signal relationships (SOURCED_FROM, REPRESENTS)
  - Unlinked signals with empty workload fields
  - Multi-query panel creating multiple signals
  - Idempotency via MERGE upsert
- TestSignalIngestion_LowConfidenceFiltering: Verifies confidence <0.5 filtered
- TestSignalIngestion_NamespaceOnlyInference: Verifies namespace-only signals

150+ lines of test coverage for signal extraction, classification, quality scoring, and graph persistence through DashboardSyncer. Follows existing test patterns from dashboard_syncer_test.go and graph_builder_test.go (mockGraphClient, no testcontainers).
Tasks completed: 2/2
- Create end-to-end signal ingestion integration test
- Human verification checkpoint (APPROVED)

SUMMARY: .planning/phases/24-data-model-ingestion/24-04-SUMMARY.md

Phase 24 COMPLETE: Signal ingestion pipeline verified
- 4 plans executed (24-01 through 24-04)
- Total duration: ~25 minutes
- All requirements met for Phase 25 (Baseline storage)
Phase 24 delivers signal ingestion pipeline:
- SignalAnchor schema with role classification and quality scoring
- 5-layer classifier (hardcoded → PromQL → name → title → unknown)
- 5-factor quality scorer with alert boost
- Signal extractor and K8s workload linker
- GraphBuilder integration with MERGE upsert
- DashboardSyncer hook for automatic ingestion
- 10 end-to-end integration tests

25/61 v1.5 requirements complete.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Phase 25: Baseline & Anomaly Detection

- Implementation decisions documented
- Phase boundary established
Phase 25: Baseline & Anomaly Detection

- Standard stack identified (gonum/stat v0.17.0, FalkorDB)
- Architecture patterns documented (SignalBaseline storage, hybrid scoring)
- Pitfalls catalogued (sample variance, cold start, percentile sorting)
- Code examples provided for statistical computation and aggregation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Phase 25: Baseline & Anomaly Detection

- 5 plan(s) in 3 wave(s)
- Wave 1: 25-01 (types), 25-02 (TDD scorer) - parallel
- Wave 2: 25-03 (storage+collector), 25-04 (backfill+aggregation) - parallel
- Wave 3: 25-05 (integration test) - sequential
- Covers 12 requirements: BASE-01 through BASE-06, ANOM-01 through ANOM-06
- Ready for execution

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Define SignalBaseline struct with identity fields matching SignalAnchor composite key
- Add RollingStats struct for intermediate statistical computation
- Implement ComputeRollingStatistics using gonum/stat (Mean, StdDev, Quantile)
- Add InsufficientSamplesError for cold start handling
- Define MinSamplesRequired constant (10 samples)
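A stdlib-only sketch of the computation ComputeRollingStatistics performs; the plan itself uses gonum/stat (Mean, StdDev, Quantile) for the same quantities, and the function name and nearest-rank percentile method here are illustrative:

```go
package main

import (
	"math"
	"sort"
)

// computeRollingStats returns mean, sample standard deviation (n-1 divisor,
// the "sample variance" pitfall noted in the phase research), and two
// percentiles over a copy of the input (the input slice is never mutated).
func computeRollingStats(values []float64) (mean, stddev, p50, p99 float64) {
	n := len(values)
	if n == 0 {
		return 0, 0, 0, 0
	}
	sorted := append([]float64(nil), values...) // copy: don't mutate the input
	sort.Float64s(sorted)                       // percentiles require sorting

	sum := 0.0
	for _, v := range sorted {
		sum += v
	}
	mean = sum / float64(n)

	if n > 1 {
		ss := 0.0
		for _, v := range sorted {
			d := v - mean
			ss += d * d
		}
		stddev = math.Sqrt(ss / float64(n-1))
	}

	// Nearest-rank percentile over the sorted samples.
	rank := func(p float64) float64 {
		i := int(math.Ceil(p*float64(n))) - 1
		if i < 0 {
			i = 0
		}
		return sorted[i]
	}
	return mean, stddev, rank(0.50), rank(0.99)
}
```

Sorting a copy rather than the caller's slice is what the "input slice is not mutated" test in 25-01 pins down.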
- Test basic values computation (mean, min, max, median, sample count)
- Test empty input returns zero-valued RollingStats
- Test single value edge case
- Test percentiles with 100-value dataset (P50, P90, P99)
- Test input slice is not mutated during computation
- Test large dataset (1000 values)
- Test standard deviation calculation accuracy
- Test negative values handling
- Test InsufficientSamplesError message format
- Test MinSamplesRequired constant value
- Test SignalBaseline and RollingStats struct fields
TDD RED phase - tests for:
- Z-score computation with normalization (ANOM-01)
- Percentile comparison (above P99, below min) (ANOM-02)
- Hybrid scoring using MAX of both methods
- Confidence calculation (ANOM-03)
- Cold start handling (ANOM-04)
- Alert override (ANOM-06)
- Edge cases: zero stddev, negative z-scores, score bounds

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
TDD GREEN phase - implementation passing all tests:
- AnomalyScore type with Score, Confidence, Method, ZScore fields
- ComputeAnomalyScore function with hybrid z-score + percentile
- Z-score normalized to 0-1 via exponential squashing: 1 - exp(-|z|/2)
- Percentile comparison for values above P99 or below Min
- Final score = MAX of both methods (per CONTEXT.md)
- Confidence = MIN(sampleConfidence, qualityScore)
- Cold start returns InsufficientSamplesError for < 10 samples
- ApplyAlertOverride for firing alerts (score=1.0, confidence=1.0)

Requirements: ANOM-01 (z-score), ANOM-02 (percentile), ANOM-03 (confidence), ANOM-04 (cold start), ANOM-06 (alert override)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
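The hybrid scoring combines both methods like this (simplified signature, and the 0.8 percentile-component weight is an illustrative assumption; only the 1 - exp(-|z|/2) squashing and the MAX combination come from the description above):

```go
package main

import "math"

// computeAnomalyScore squashes the z-score into [0,1] with 1 - exp(-|z|/2),
// flags values outside the observed distribution (above P99 or below min)
// with a fixed percentile component, and returns the MAX of both methods.
func computeAnomalyScore(value, mean, stddev, p99, min float64) (score, zScore float64) {
	if stddev > 0 {
		zScore = (value - mean) / stddev // zero stddev leaves zScore at 0
	}
	zComponent := 1 - math.Exp(-math.Abs(zScore)/2)

	pctComponent := 0.0
	if value > p99 || value < min {
		pctComponent = 0.8 // outside the observed distribution
	}
	return math.Max(zComponent, pctComponent), zScore
}
```

The MAX combination means a value can score high via either route: a moderate z-score inside the percentile range, or a value just past P99 on a noisy series where the z-score alone stays small.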
Tasks completed: 2/2
- Task 1: SignalBaseline type and RollingStats computation
- Task 2: Unit tests for rolling statistics

SUMMARY: .planning/phases/25-baseline-anomaly-detection/25-01-SUMMARY.md
Tasks completed: 2/2 (TDD cycle)
- RED: Add failing tests for anomaly scoring (18 tests)
- GREEN: Implement hybrid anomaly scoring

SUMMARY: .planning/phases/25-baseline-anomaly-detection/25-02-SUMMARY.md

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add UpsertSignalBaseline with MERGE upsert semantics
- Add GetSignalBaseline for composite key lookup (nil, nil if not found)
- Add GetBaselinesByWorkload with TTL filtering via expires_at
- Add GetActiveSignalAnchors for baseline collection
- Create HAS_BASELINE relationship from SignalAnchor to SignalBaseline
- Add parsing helpers: parseFloat64, parseInt, parseInt64

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Change SignalValidationJob() return type to interface{} for API handler compatibility
- Add Prometheus connection test to Test Connection flow
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add multi-select dropdown to filter Observatory graph by node types (SignalAnchor, Alert, Dashboard, Panel, Query, Metric, Service, Workload, SignalBaseline). Includes client-side filtering of nodes and edges, contextual empty states, and footer showing filtered counts. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
FalkorDB has quirks where:
- `r.deleted = false` doesn't work in WHERE clauses, but `NOT r.deleted` does
- `IN ['a', 'b']` array syntax doesn't work reliably, use OR chain instead
- `s.field = ''` doesn't match empty strings, use `size(s.field) = 0`

These fixes enable the scrape target linker to correctly create MONITORS_WORKLOAD relationships between SignalAnchors and workloads.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
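One way to keep these workarounds from leaking across every query is to centralize them in small fragment builders. The helper names below are hypothetical; only the emitted Cypher fragments follow the workarounds described:

```go
package main

import (
	"fmt"
	"strings"
)

// notDeleted renders a deleted-flag check as `NOT r.deleted` instead of
// `r.deleted = false`, which FalkorDB does not evaluate as expected.
func notDeleted(alias string) string {
	return fmt.Sprintf("NOT %s.deleted", alias)
}

// inClause expands an IN-list membership test into an OR chain, since
// `field IN ['a', 'b']` is unreliable.
func inClause(field string, values []string) string {
	parts := make([]string, len(values))
	for i, v := range values {
		parts[i] = fmt.Sprintf("%s = '%s'", field, v)
	}
	return "(" + strings.Join(parts, " OR ") + ")"
}

// isEmpty matches empty strings via size(), since `field = ''` does not match.
func isEmpty(field string) string {
	return fmt.Sprintf("size(%s) = 0", field)
}
```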
- Add preStop hook to run `redis-cli SHUTDOWN SAVE` for clean data persistence
- Disable AOF persistence (was causing crashes during replay)
- Increase RDB save frequency to compensate for AOF being disabled
- Increase terminationGracePeriodSeconds to 60s for graceful shutdown

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add Observatory page to visualize SignalAnchors, Alerts, Dashboards, Panels, Queries, Metrics, Services, and Workloads with their relationships.

Backend:
- Add observatory_graph analyzer for graph data queries
- Add observatory_graph_handler API endpoint at /api/v1/observatory/graph
- Support filtering by integration, namespace, and include baselines option

Frontend:
- Add Observatory route and navigation with telescope icon
- Add D3.js force-directed graph visualization
- Add node detail panel showing properties and relationships
- Add collapsible legend, zoom controls
- Add node type filter dropdown for filtering by resource type
- Color-coded nodes by type with icons

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Metrics like container_* from kubelet/cadvisor are available for ALL pods in the cluster, not just those with direct Prometheus scrape targets. Add linkUniversalMetrics() that links SignalAnchors with container_* metrics to ALL Deployment/StatefulSet/DaemonSet workloads with confidence 0.6. This ensures metrics like container_memory_working_set_bytes are linked to all 53 workloads instead of only the 6 with direct scrape targets. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Observatory Graph improvements:
- Fix node drag behavior by applying drag to all nodes, not just new ones
- Match NamespaceGraph charge strength (-800) for consistent feel
- Fix SelectDropdown click-outside using capture phase for SVG canvas

Observatory Page improvements:
- Replace integration search with client-side node search
- Remove redundant labels, use descriptive placeholders instead

Observatory API fix:
- Add RelationshipLimitMultiplier (50x) for workload queries
- Ensures universal metrics (container_*) return all workload connections
- Before: 6 workloads, 50 edges; After: 53 workloads, 2500 edges

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
FalkorDB's MERGE has issues matching nodes when multiple properties include empty strings, causing duplicate SignalAnchors on each sync.

Fix: Use single composite uid field for MERGE instead of multiple properties
- Format: metric_name:workload_namespace:workload_name
- Updated metrics_syncer.go and graph_builder.go to MERGE on uid
- Added index on SignalAnchor.uid for query performance

Before: 4,913 SignalAnchors with duplicates, 46,966 edges
After: 311 unique SignalAnchors, 1,786 edges

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
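The uid construction itself is trivial but worth pinning down, since the fix depends on empty components producing a stable key rather than confusing multi-property matching (hypothetical helper name):

```go
package main

import "fmt"

// signalUID builds the single composite uid used for MERGE. Empty components
// are kept as empty segments, so namespace-only or unlinked signals still get
// one stable key instead of tripping FalkorDB's empty-string property matching.
func signalUID(metricName, namespace, workloadName string) string {
	return fmt.Sprintf("%s:%s:%s", metricName, namespace, workloadName)
}
```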
…d batch queries

Add performance optimizations to reduce CPU utilization in graph sync:
- State Cache: LRU cache for resource states to avoid DB queries during change detection (166x faster: 16ns vs 2678ns per lookup)
- Label Index: In-memory index for fast Pod selector lookups without graph queries (~300k lookups/sec, scales to 10k+ pods)
- Batch Queries: UNWIND-based Cypher queries for bulk node/edge creation, reducing ~400 individual MERGE queries to ~10-15 batch queries per batch
- Prometheus Metrics: Observable cache hit rates, processing times, and batch statistics for monitoring optimization effectiveness

Includes comprehensive tests: performance benchmarks, regression tests for change detection, and metrics validation.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The Observatory page was showing only a blue background with no feedback when data was loading.

Added proper UI states for:
- Loading spinner while fetching data
- Fallback message when data is unexpectedly null after loading
- Existing error and empty data states remain unchanged

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The hook was initializing isLoading as false, causing a brief flash where no content was shown before the useEffect ran. Now isLoading starts as true when enabled, ensuring the loading spinner shows immediately. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The relationship queries (MONITORS_WORKLOAD, CORRELATES_WITH, HAS_BASELINE) were returning edges for all SignalAnchors matching the WHERE clause, not just those included in the first query's LIMIT. This caused orphaned edges referencing non-existent nodes, crashing D3's force simulation with "node not found" errors. Fix: Skip creating edges when the source SignalAnchor node isn't in the signalIDs map (meaning it was cut off by the LIMIT). Co-Authored-By: Claude (claude-opus-4-5) <noreply@anthropic.com>
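The server-side guard described here is a simple membership check when assembling the response (minimal stand-in types; the real handler works on richer node/edge structs):

```go
package main

// Edge is a minimal graph edge for this sketch.
type Edge struct{ Source, Target string }

// filterEdges drops edges whose source node was cut off by the node query's
// LIMIT, so the frontend never receives an orphaned edge referencing a node
// that is absent from the returned set (which crashed D3's force simulation).
func filterEdges(edges []Edge, signalIDs map[string]bool) []Edge {
	kept := make([]Edge, 0, len(edges))
	for _, e := range edges {
		if !signalIDs[e.Source] {
			continue // source SignalAnchor not in the returned node set
		}
		kept = append(kept, e)
	}
	return kept
}
```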
- Change namespace filter from text input to single-select dropdown
- Add workload dropdown with "All Workloads" default option
- Extract namespace/workload values from Workload nodes
- Support clearing single-select dropdowns (click to toggle or clear button)
- Reset workload filter when namespace changes
- Add CLAUDE.md with deployment instructions
- Remove obsolete --graph-rebuild-on-start flag from Makefile

Co-Authored-By: Claude (claude-opus-4-5) <noreply@anthropic.com>
When filtering by namespace/workload, the query now finds SignalAnchors that are connected to workloads in that namespace via MONITORS_WORKLOAD relationships, even if the SignalAnchor itself doesn't have the namespace set directly (universal signals). This fixes the issue where filtering by namespace would exclude workloads that were monitored by universal SignalAnchors. Co-Authored-By: Claude (claude-opus-4-5) <noreply@anthropic.com>
Use transform: translateX() instead of marginLeft to shift main content when sidebar expands. Transforms don't trigger layout recalculation, so the SVG graph won't resize and cause janky UX.

- Outer wrapper has fixed marginLeft for collapsed sidebar
- Inner wrapper uses translateX for smooth shift without resize
- overflow: hidden clips translated content

Co-Authored-By: Claude (claude-opus-4-5) <noreply@anthropic.com>
Update scaleExtent dynamically when fitToView is called so that the calculated scale becomes the new minimum. This prevents:
- View jumping when zooming in after fit-to-view
- Zoom out beyond the fit-to-view level (now capped at fit scale)

The issue was that fitToView could set a scale outside the fixed scaleExtent [0.1, 4], causing D3 to snap to the extent bounds on subsequent zoom operations.

Co-Authored-By: Claude (claude-opus-4-5) <noreply@anthropic.com>
- Add defaultObservatoryNodeTypes setting (default: SignalAnchor, Workload)
- Add Observatory section in Settings page to configure default types
- Auto-fit graph to view when namespace or workload filter changes
- Auto-fit graph on initial data load
- Use settings-based defaults instead of hardcoded empty array

Co-Authored-By: Claude (claude-opus-4-5) <noreply@anthropic.com>
…ection
Grafana dashboards commonly use template variables ($datasource, ${datasource})
and special values (-- Mixed --, default) for datasource configuration. The
baseline collector was passing these values directly to the Grafana API,
causing 404 "Data source not found" errors.
This fix:
- Adds getPrometheusDatasourceUID() to query and cache the actual Prometheus
datasource UID from Grafana API
- Updates resolveDatasourceUID() to detect values needing resolution:
empty strings, variable references, and special Grafana values
- Falls back to API-discovered Prometheus datasource when template values
cannot be resolved from dashboard configuration
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
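The detection half of this fix comes down to a predicate over the raw datasource value from the dashboard JSON (hypothetical function name; the special values are the ones listed above):

```go
package main

import "strings"

// needsResolution reports whether a dashboard's datasource value cannot be
// used directly against the Grafana API and should fall back to the
// API-discovered Prometheus datasource UID.
func needsResolution(datasource string) bool {
	switch datasource {
	case "", "default", "-- Mixed --":
		return true // special Grafana values, not real datasource UIDs
	}
	// Template variable references like $datasource or ${datasource}.
	return strings.HasPrefix(datasource, "$")
}
```

A real resolver would cache the discovered Prometheus UID so the fallback costs one Grafana API call per sync, not one per panel.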
…rDB panic
FalkorDB Go SDK cannot serialize Go slices ([]string, []map[string]interface{})
as query parameters, causing "Unrecognized type to convert to string" panics.
Changed all 16 batch query functions to build inline Cypher list literals
instead of using parameterized queries. Added helper functions
buildCypherMapLiteral() and buildCypherListLiteral() for constructing valid
Cypher syntax. Also enhanced escapeCypherString() to escape backslashes.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
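A sketch of the inline-literal helpers this commit describes. The function names match the commit; the bodies are illustrative (escaping order, backslashes before quotes, is the important part):

```go
package main

import (
	"fmt"
	"strings"
)

// escapeCypherString escapes backslashes first, then single quotes, so that
// a pre-existing backslash is never mistaken for an escape we added.
func escapeCypherString(s string) string {
	s = strings.ReplaceAll(s, `\`, `\\`)
	return strings.ReplaceAll(s, `'`, `\'`)
}

// buildCypherListLiteral renders a []string as an inline Cypher list literal,
// avoiding slice query parameters the FalkorDB Go SDK cannot serialize.
func buildCypherListLiteral(items []string) string {
	quoted := make([]string, len(items))
	for i, it := range items {
		quoted[i] = "'" + escapeCypherString(it) + "'"
	}
	return "[" + strings.Join(quoted, ", ") + "]"
}

// buildCypherMapLiteral renders key/value pairs as an inline Cypher map
// literal; keys are emitted in the given order for deterministic queries.
func buildCypherMapLiteral(keys []string, values map[string]string) string {
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, fmt.Sprintf("%s: '%s'", k, escapeCypherString(values[k])))
	}
	return "{" + strings.Join(parts, ", ") + "}"
}
```

The trade-off of inline literals over parameters is that correctness now rests entirely on the escaping, which is why the backslash fix ships in the same change.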
The two-phase batch processing was dropping CHANGED edges created by BuildResourceNodes() in Phase 1. These structural edges link ResourceIdentity to ChangeEvent nodes but were not being passed to applyBatchedEdgeUpdates().

Changes:
- Pipeline: Include edges from Phase 1 nodeUpdates in Phase 2 edge processing so CHANGED/EMITTED_EVENT edges are written to the graph
- Unit tests: Update mock client to extract UIDs from inline Cypher literals in batch queries (parameters are now nil)
- Performance tests: Use testcontainers instead of requiring external FalkorDB instance, making tests self-contained

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add beta feature gating for Observatory and Integrations pages:
- Create BetaFeaturesContext to track ?beta=true query parameter
- Filter navigation items in Sidebar based on beta flag
- Protect routes with BetaRoute wrapper that redirects to home
- Beta flag persists for the session once enabled

Access these features by adding ?beta=true to any URL.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add id="how-it-works" to How It Works section
- Add id="incident-response" to Incident Response section
- Update nav link from "Integration" to "Incident Response"

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Moritz Johner <beller.moritz@googlemail.com>
The sidebar test was checking margin-left on <main>, but the layout uses a fixed margin-left on the outer wrapper div and transform: translateX() on <main> for the expand/collapse animation. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The falkordb/falkordb:latest image has a bug that causes the container to crash under load during data processing. This manifested as "connection refused" errors during Phase 2 (edge creation) of the pipeline's batch processing. Pin all FalkorDB container references to the stable v4.2.0 version to ensure test reliability. Also update handler tests to use GetGraphService() instead of GetClient() for consistency with the golden tests. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Placeholder ResourceIdentity nodes created by OWNS edge queries only had uid set, leaving r.deleted as NULL. Timeline queries using NOT r.deleted filtered these out because NOT NULL evaluates to NULL (falsy) in Cypher.

Fix: use COALESCE(r.deleted, false) in query WHERE clauses and initialize r.deleted on MATCH in UpsertResourceIdentityQuery. Also unconditionally set core identity properties (kind, apiGroup, version, namespace, name) on MATCH to fix placeholder nodes.

Additional improvements:
- Add batch size limits (maxBatchSize=1000) for FalkorDB stability
- Use actual FalkorDB stats for edge creation counts
- Improve config reload E2E test reliability
- Reduce watcher debug log verbosity

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Moritz Johner <beller.moritz@googlemail.com>