Establish a robust testing framework for scalability and reliability #196

Open
3 tasks
janetkuo opened this issue May 9, 2025 · 0 comments
Labels
help wanted Extra attention is needed

Comments

janetkuo commented May 9, 2025

With kubectl-ai recently achieving significant traction, it's crucial to establish a robust and comprehensive testing framework. This will ensure the project's continued stability, reliability, and scalability as we add new features and attract more contributors.

Tasks include, but are not limited to:

  • Comprehensive unit tests: Ensure that individual components, functions, and logic are thoroughly tested in isolation, including command parsing, prompt generation logic, Kubernetes API interaction helpers, output formatting, and utility functions.
  • Integration tests: Validate interactions between kubectl-ai's components and its external dependencies where appropriate.
  • Mock frameworks: Simulate LLM responses so tests can run without calling real models.