
Conversation

KariHall619
Contributor

What type of PR is this?
Implement AI model auto-detection and enhance plugin loading reliability

What this PR does / why we need it:

This PR introduces comprehensive AI plugin integration capabilities with intelligent model detection and robust plugin loading mechanisms. The changes address critical reliability issues in plugin initialization and provide a foundation for dynamic AI model management.

Key Features:

  1. AI Model Auto-Detection System (pkg/testing/model_detector.go)

    • Implements automatic detection of available AI models across multiple providers (OpenAI, Claude, local models)
    • Eliminates the need for manual model configuration
    • Provides comprehensive error handling and logging
  2. Enhanced Plugin Loading Reliability (console/atest-ui/src/views/Extension.vue)

    • Implements a retry mechanism with linearly increasing backoff (up to 10 attempts, with delays of 50ms, 100ms, 150ms, ...)
    • Proper loading state management to prevent UI freezing
    • Comprehensive error handling with detailed logging
    • Optimized CSS/JS loading sequence
  3. AI Plugin Communication Infrastructure (pkg/testing/remote/grpc_store.go)

    • Extends gRPC store to support AI operations (ai.generate, ai.capabilities)
    • Implements parameter encoding and response conversion for AI queries
    • Standardizes AI response format for consistent integration
  4. Configuration Optimization (cmd/testdata/stores.yaml)

    • Updates AI plugin to use Unix socket communication (unix:///tmp/atest-ext-ai.sock)
    • Enables auto-detection with improved model description
    • Removes hardcoded model defaults to support dynamic detection
  5. Development Environment Improvements

    • Fixed extension menu parameter passing in frontend routing
    • Updated Vite configuration to avoid port conflicts (5173 → 5174)
    • Complete unit test coverage for model detection functionality

Why we need it:

  • Reliability: The original plugin loading was fragile and failed when plugins needed initialization time
  • User Experience: Auto-detection eliminates manual configuration and reduces setup complexity
  • Scalability: Provides a robust foundation for supporting multiple AI providers and models
  • Maintainability: Comprehensive error handling and logging improve debugging and monitoring

Testing:

  • Added 91 lines of unit tests for model detection functionality
  • Covers various detection scenarios and edge cases
  • Ensures stability and reliability of the AI integration system

This PR transforms the AI plugin system from a basic proof of concept into a production-ready, reliable integration platform.

@KariHall619
Contributor Author

I am thinking about whether the model detector should be incorporated into the AI plugin...

@LinuxSuRen LinuxSuRen added the ospp 开源之夏 https://summer-ospp.ac.cn/ label Sep 29, 2025
Adds support for AI-powered features via a new extension mechanism.
This includes dynamically loading the AI extension's CSS and JS,
and retrying the plugin mounting process with increasing backoff
delays to ensure proper initialization.

Also provides basic gRPC querying functionality that can call AI
methods, converting AI responses to a standard data format.
The AI plugin can be built from source or downloaded from a binary URL.
Restore two essential bug fixes that were incorrectly removed:

1. vite.config.ts fixes:
   - Fix test-id removal logic: only remove in production, preserve for E2E tests
   - Improve build performance: replace single chunk with optimized chunk splitting
   - Separate vue, element-plus, and vendor chunks for better caching

2. App.vue fix:
   - Fix Extension component prop: use menu.index instead of menu.name
   - Ensures AI plugin can be correctly identified and loaded

These fixes are critical for AI plugin functionality and should not be reverted.
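
The chunk splitting described in point 1 could be expressed along these lines; this is a sketch of a typical Vite setup, not the project's actual vite.config.ts.

```typescript
// Hypothetical vite.config.ts fragment: separate vue, element-plus,
// and vendor chunks for better browser caching. Names are illustrative.
import { defineConfig } from 'vite'

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks(id: string) {
          if (id.includes('node_modules')) {
            if (id.includes('element-plus')) return 'element-plus'
            if (id.includes('vue')) return 'vue'
            return 'vendor'
          }
        },
      },
    },
  },
})
```
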
Consolidates the vite build configuration to output a single chunk, simplifying the config and potentially improving build times in some scenarios. The previous manual chunk configuration is removed.
Updates the AI model name description to indicate that the model is automatically detected from available models.

Sets the default value for the model to an empty string, reflecting the auto-detection behavior.
…AI parameters. Adjusts the stores.yaml file structure and adds more configuration parameters, such as provider and endpoint, for AI plugins.
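
A hedged sketch of what such a store entry might look like: the provider, endpoint, and model keys follow the commit description, and the Unix socket address comes from the PR summary, but the surrounding structure is illustrative rather than the project's exact stores.yaml schema.

```yaml
# Hypothetical store entry for the AI plugin.
- name: ai
  kind:
    name: atest-ext-ai
    url: unix:///tmp/atest-ext-ai.sock   # Unix socket communication
  properties:
    provider: openai                     # illustrative provider name
    endpoint: https://api.openai.com/v1  # illustrative endpoint
    model: ""                            # empty: auto-detected at runtime
```
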
The AI plugin build process is removed from the Makefile.

The AI plugin is assumed to be pre-built or handled by a separate process, simplifying the build and execution flow.
Comment on lines +490 to +494
params := map[string]string{
	"model":  query["model"],
	"prompt": query["prompt"],
	"config": query["config"],
}
Owner


Do you think it's necessary? It looks like you just convert the map data into a JSON string.

Contributor Author


Good catch! Added comments in 6171a02 to clarify the intent.
The function intentionally filters out the "method" field (used for routing)
before encoding; it's not just a simple map-to-JSON conversion.

(Or we can use Go structs instead of maps to provide better type safety?)

- Remove console.log/error from Extension.vue for production readiness
- Add comments to encodeAIGenerateParams explaining field filtering logic

Resolves review feedback from yuluo-yx and LinuxSuRen

Quality Gate failed

Failed conditions
B Maintainability Rating on New Code (required ≥ A)

See analysis details on SonarQube Cloud

