Conversation
Implements a visual segment detector that analyzes video content using standard Android APIs (`MediaCodec`, `MediaExtractor`) without external native libraries.

- Added `IVisualSegmentDetector` interface and `VisualDetectionConfig` model in `core/domain`.
- Implemented `VisualSegmentDetectorImpl` in `engine`, supporting:
  - `SCENE_CHANGE`: pHash Hamming distance on the downscaled Y-plane.
  - `BLUR_QUALITY`: Laplacian variance/Tenengrad on the Y-plane.
  - `FREEZE_FRAME`: SAD (Sum of Absolute Differences) between consecutive Y-planes.
  - `BLACK_FRAMES`: luma threshold check.
- Configured dependency injection in `EngineModule` to bind the new detector.
- Integrated `IVisualSegmentDetector` into `VideoEditingViewModel` (`app` module) to expose results via the existing silence preview UI flow (`_silencePreviewRanges`).
- Added the necessary string resources for UI feedback.

This implementation follows the project's Clean Architecture and avoids adding new heavy dependencies.

Co-authored-by: tazztone <62671577+tazztone@users.noreply.github.com>
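As a rough illustration of the `SCENE_CHANGE` hash-and-compare flow, the sketch below computes a 64-bit average hash over an 8×8 downscaled luma grid and compares two frames by Hamming distance. The real detector uses a pHash (DCT-based) on the Y-plane; this simplified average hash, with hypothetical function names, only demonstrates the comparison step.

```kotlin
// Simplified stand-in for the pHash step: a 64-bit average hash over an
// 8x8 downscaled luma grid. Function names are illustrative, not the
// project's actual API.
fun hashLuma(luma: IntArray): Long {
    require(luma.size == 64) { "expects an 8x8 downscaled luma grid" }
    val mean = luma.average()
    var bits = 0L
    for (i in luma.indices) {
        // Set bit i when the cell is at least as bright as the frame mean.
        if (luma[i] >= mean) bits = bits or (1L shl i)
    }
    return bits
}

// Number of differing bits between two frame hashes; a large distance
// between consecutive frames suggests a scene change.
fun hammingDistance(a: Long, b: Long): Int = (a xor b).countOneBits()
```

A uniform frame and a high-contrast frame produce hashes that differ in many bits, while two identical frames yield a distance of zero, so a threshold on the distance separates cuts from continuous footage.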
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode; when this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
Fixed `Argument type mismatch` error in `VideoEditingViewModel.kt` by using the named argument `message` when initializing `VideoEditingUiState.Loading`. Co-authored-by: tazztone <62671577+tazztone@users.noreply.github.com>
Updated `VideoEditingViewModelTest`, `VideoEditingConcurrencyTest`, and `VideoEditingEdgeCaseTest` to pass a mock `IVisualSegmentDetector` to the `VideoEditingUseCases` constructor, fixing compilation errors in tests. Co-authored-by: tazztone <62671577+tazztone@users.noreply.github.com>
- Refactored `VisualSegmentDetectorImpl` to reduce complexity and magic numbers.
- Introduced `DetectionContext` to simplify the `detectLoop` logic.
- Suppressed remaining `MagicNumber` violations in `VisualSegmentDetectorImpl`.
- Fixed wildcard imports in `SettingsScreen.kt`.
- Updated unit tests to fix compilation errors with `VideoEditingUseCases`.

Co-authored-by: tazztone <62671577+tazztone@users.noreply.github.com>
- Replace Silence Cut with a unified "Smart Cut" tabbed overlay.
- Implement `VisualSegmentDetectorImpl` with 4 strategies: Scene Change, Black Frames, Freeze Frame, and Blur Quality.
- Optimize the detection engine with progress reporting and coroutine cancellation.
- Refactor `SilenceDetectionUseCase` to support SPLIT and DISCARD detection modes.
- Update `CustomVideoSeeker` to visualize scene splits and visual detection ranges with color differentiation.
- Fix compilation errors in unit tests and resolve all detekt code smells.
- Update documentation in README.md to reflect new features.
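The SPLIT/DISCARD distinction above can be sketched as a pure function over detected time ranges. This is a minimal, hypothetical interpretation — names and semantics are assumptions, not the project's actual `SilenceDetectionUseCase` API: SPLIT keeps all content but cuts at each detected range's start; DISCARD keeps only the content between detected ranges.

```kotlin
// Hypothetical sketch of the two detection modes; not the project's code.
enum class DetectionMode { SPLIT, DISCARD }

data class RangeMs(val startMs: Long, val endMs: Long)

// Given detected ranges over [0, durationMs), return the segments to keep.
fun keptSegments(detected: List<RangeMs>, durationMs: Long, mode: DetectionMode): List<RangeMs> =
    when (mode) {
        // DISCARD: drop the detected ranges, keep everything between them.
        DetectionMode.DISCARD -> {
            val kept = mutableListOf<RangeMs>()
            var cursor = 0L
            for (r in detected.sortedBy { it.startMs }) {
                if (r.startMs > cursor) kept += RangeMs(cursor, r.startMs)
                cursor = maxOf(cursor, r.endMs)
            }
            if (cursor < durationMs) kept += RangeMs(cursor, durationMs)
            kept
        }
        // SPLIT: keep all content, but introduce a cut at each range start.
        DetectionMode.SPLIT -> {
            val cuts = (detected.map { it.startMs }.filter { it in 1 until durationMs } + durationMs)
                .distinct().sorted()
            val segments = mutableListOf<RangeMs>()
            var prev = 0L
            for (c in cuts) { segments += RangeMs(prev, c); prev = c }
            segments
        }
    }
```

For a single detected range of 1–2 s in a 5 s clip, DISCARD yields two kept segments (0–1 s and 2–5 s) while SPLIT yields two segments covering the whole clip with a cut at 1 s.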
Changed the NO_FLAGS constant to an explicit 0 value in the MediaCodec.configure() call to ensure proper compilation and avoid potential type issues.
This change introduces a two-phase approach to visual segment detection:

1. First, analyze all frames at the configured sample interval and cache the results.
2. Then apply filtering logic to the cached analysis to find segments.

The caching mechanism significantly improves performance when users adjust sensitivity or minimum segment duration, as it avoids re-analyzing the video for each parameter change. The filter logic is now debounced to prevent excessive filtering operations during slider adjustments.

New files:
- `FrameAnalysis.kt`: Data class for storing per-frame analysis results
- `VisualSegmentFilter.kt`: Filtering logic that operates on cached frame analysis

Modified files:
- `IVisualSegmentDetector.kt`: Changed `detect()` to `analyze()`, returning `FrameAnalysis`
- `VisualSegmentDetectorImpl.kt`: Now performs analysis and stores per-frame metrics
- `VideoEditingViewModel.kt`: Added caching and filtering logic
- `VisualDetectionOverlayController.kt`: Added debounced filtering and UI state management
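The two-phase shape can be sketched as a pure filtering function over cached per-frame metrics. Field and function names below are assumptions for illustration; they mirror, but may not match, `FrameAnalysis.kt` and `VisualSegmentFilter.kt`. The expensive decode/analyze pass (phase 1) fills the cache once; the cheap threshold pass (phase 2) re-runs on every slider change.

```kotlin
// Illustrative cache entry: one sampled frame's timestamp and metric
// (e.g. SAD, Laplacian variance, or Hamming distance).
data class FrameAnalysis(val timestampMs: Long, val metric: Double)

// Phase 2 (cheap, re-run per parameter change): group consecutive
// matching frames into segments at least minSegmentMs long.
fun filterSegments(cache: List<FrameAnalysis>, threshold: Double, minSegmentMs: Long): List<LongRange> {
    val segments = mutableListOf<LongRange>()
    var start: Long? = null
    var last = 0L
    for (f in cache) {
        if (f.metric >= threshold) {
            if (start == null) start = f.timestampMs
            last = f.timestampMs
        } else if (start != null) {
            if (last - start >= minSegmentMs) segments += start..last
            start = null
        }
    }
    // Flush a segment that runs to the end of the cache.
    if (start != null && last - start >= minSegmentMs) segments += start..last
    return segments
}
```

Because `filterSegments` never touches the decoder, adjusting sensitivity or minimum duration only re-runs this loop over the cached list, which is what makes the debounced slider updates cheap.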
…e management

- Add `onDismiss` callbacks to overlay controllers for proper cleanup
- Replace `smartCutContainer` with `smartCutOverlay` in the fragment layout
- Add proper cleanup of the detection preview when hiding overlays
- Improve UI state management with proper visibility restoration
- Add tooltip strings for better user guidance
- Fix the visual detection overlay to properly handle interval changes and caching
- Add error handling for visual analysis failures with user feedback
- Improve padding slider linking and value formatting
- Add a `hasCachedAnalysis` method to check for existing visual analysis results
… detection

Add comprehensive tooltips to all smart cut controls for better user guidance. Refactor visual detection strategy selection with improved UI and state management. Update layout files to support the new tooltip functionality and improved visual hierarchy. Add detailed documentation strings for all new tooltip content.

BREAKING CHANGE: Visual detection strategy selection UI has been completely redesigned with a new layout structure and tooltip system requiring updated string resources
Add lifecycle observer to properly clean up tooltip popup when the overlay is destroyed, preventing memory leaks and ensuring proper resource management.
…iew model
The visual detection progress tracking was redundant and caused unnecessary state management complexity. The detection process now runs without progress updates, simplifying the UI state flow.
fix(visual-detection): fix boundary threshold constant in silence detection
Replaced the magic number assigned to `minKeepSegmentDurationMs` with a properly named constant, `BOUNDARY_THRESHOLD_MS`, to improve code clarity and maintainability.
refactor(visual-detection): improve visual segment detector implementation
- Removed unnecessary `@Suppress("MagicNumber")` annotation
- Fixed downscale calculation to properly handle target dimensions
- Improved blur variance calculation with correct width/height handling
- Enhanced downscaleY function to return width/height metadata
- Fixed mean luma calculation to handle zero count case
- Improved code structure and removed redundant constants
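The zero-count guard mentioned above matters for the `BLACK_FRAMES` mean-luma check: an empty or zero-sized Y-plane must not divide by zero. The sketch below is a minimal illustration with assumed names, not the project's actual helper.

```kotlin
// Mean luma over the first `count` bytes of a Y-plane, with a guard for
// the zero-count case. Names and the black-frame threshold are assumed.
fun meanLuma(yPlane: ByteArray, count: Int = yPlane.size): Double {
    if (count <= 0) return 0.0  // zero-count case: avoid division by zero
    var sum = 0L
    // Bytes are signed in Kotlin; mask to get the unsigned luma value.
    for (i in 0 until count) sum += yPlane[i].toInt() and 0xFF
    return sum.toDouble() / count
}

// A frame whose mean luma falls below the threshold is treated as black.
fun isBlackFrame(yPlane: ByteArray, lumaThreshold: Double = 16.0): Boolean =
    meanLuma(yPlane) < lumaThreshold
```

Note the `and 0xFF` mask: Kotlin's `Byte` is signed, so bright luma values above 127 would otherwise read as negative and drag the mean down.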
docs: update GEMINI.md workflow documentation
Fixed formatting in the workflow commands section for better readability.
When the tooltip popup is positioned too close to the top edge of the screen, it would be clipped and partially invisible. This change adjusts the positioning logic to show the tooltip below the anchor when it would otherwise be clipped at the top.
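The repositioning rule can be expressed as a small pure function: if placing the tooltip above the anchor would push it past y = 0, place it below the anchor instead. Parameter names and the margin value are assumptions for illustration, not the project's actual popup code.

```kotlin
// Returns the tooltip's y coordinate in screen space. Prefers the position
// above the anchor; falls back to below the anchor when the preferred
// position would be clipped at the top edge.
fun tooltipY(anchorTop: Int, anchorHeight: Int, tooltipHeight: Int, margin: Int = 8): Int {
    val above = anchorTop - tooltipHeight - margin
    return if (above >= 0) above else anchorTop + anchorHeight + margin
}
```

For an anchor well down the screen the tooltip sits above it; for an anchor near the top edge the same call flips the tooltip below the anchor, so it is never clipped.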
…eeker

Visual segment filter now expands single-frame matches by a fixed padding (100ms, or minSegmentMs if > 0) to ensure they appear as visible mask rectangles in the seeker UI. This fixes the issue where single-frame matches were not visible due to their zero duration.

BREAKING CHANGE: Visual segment filter behavior changed to expand single-frame matches for better UI visibility
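A possible shape for this expansion rule is sketched below, assuming a symmetric widening around the match; the symmetric split and all names are assumptions, not the project's `VisualSegmentFilter` code.

```kotlin
// Assumed default padding for zero-duration matches, per the commit message.
val DEFAULT_PADDING_MS = 100L

// Widen a zero-duration match so it renders as a visible mask rectangle in
// the seeker; ranges that already have duration pass through unchanged.
fun expandIfSingleFrame(startMs: Long, endMs: Long, minSegmentMs: Long): LongRange {
    if (endMs > startMs) return startMs..endMs  // already visible
    val pad = if (minSegmentMs > 0) minSegmentMs else DEFAULT_PADDING_MS
    val half = pad / 2
    // Clamp at 0 so a match at the start of the clip stays in bounds.
    return (startMs - half).coerceAtLeast(0)..(startMs + half)
}
```

A match at exactly 1000 ms with no minimum duration becomes the 950–1050 ms range, wide enough to draw, while a real 10–20 ms range is returned untouched.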
…ation

- Extracted a `MIN_VISIBLE_STAMP_DURATION_MS` constant from the hardcoded value
- Added comprehensive test coverage for `VisualSegmentFilter` edge cases
- Improved code readability and maintainability by removing magic numbers
Force-pushed from 7ae8409 to a907f0d.
Implemented visual segment detection for scene changes, blur, freeze frames, and black frames using `MediaCodec` Y-plane analysis. Integrated into `VideoEditingViewModel` to reuse the existing timeline visualization.

PR created automatically by Jules for task 9545547823306881381 started by @tazztone