
[Test Improver] tests: skip runtime factory tests when LLM not installed#1139

Draft
danielmeppiel wants to merge 1 commit into main from
test-assist/fix-runtime-factory-skip-tests-1cae65ce61da29b3

Conversation

@danielmeppiel
Collaborator

🤖 Test Improver - automated AI assistant for test improvements.

Goal and Rationale

Four tests in test_runtime_factory.py hard-coded assumptions that the llm runtime is installed on the host system. In CI (and standard dev environments without the llm Python package), these tests fail unconditionally with:

ValueError: Runtime 'llm' is not available on this system
AssertionError: assert False is True

This creates noise in CI output and obscures real failures.
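To make the failure mode concrete, here is a minimal reconstruction of the hard-coded pattern, assuming a RuntimeFactory whose availability probe checks the host; the class body and the shutil.which-based detection are illustrative stand-ins, not the project's actual implementation:

```python
import shutil

class RuntimeFactory:
    """Illustrative stand-in for the project's real factory."""

    @staticmethod
    def runtime_exists(name: str) -> bool:
        # Availability depends on what is installed on the host,
        # so this returns False for 'llm' on most CI machines.
        return shutil.which(name) is not None

# The old tests asserted unconditionally, e.g.:
#   assert RuntimeFactory.runtime_exists("llm") is True
# which on a host without the llm package fails with
#   AssertionError: assert False is True
```

Because the assertion encodes a property of the host environment rather than of the code under test, it fails on any machine missing the package.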

Approach

Added module-level skip markers that probe runtime availability at collection time using the factory's own runtime_exists() method, so the skip logic stays consistent with production code:

_llm_available = RuntimeFactory.runtime_exists("llm")
skip_no_llm = pytest.mark.skipif(not _llm_available, reason="LLM runtime not installed on this system")
  • LLM-specific tests (test_get_runtime_by_name_llm_real, test_create_runtime_with_name_real, test_runtime_exists_llm_true) now skip gracefully when LLM is absent
  • test_get_available_runtimes_real_system refactored to be environment-agnostic (any list is valid)
  • A new test_get_available_runtimes_includes_llm test captures the LLM-specific assertion, also skippable
  • Tests for get_best_available_runtime and create_runtime (auto-detect) skip when no runtime at all is available
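Putting the pieces together, the module-level pattern looks roughly like this; the real code probes via RuntimeFactory.runtime_exists("llm"), for which shutil.which is an assumed stand-in here, and the test body is a placeholder:

```python
import shutil
import pytest

# Probed once at collection time; stand-in for RuntimeFactory.runtime_exists("llm").
_llm_available = shutil.which("llm") is not None

skip_no_llm = pytest.mark.skipif(
    not _llm_available,
    reason="LLM runtime not installed on this system",
)

@skip_no_llm
def test_runtime_exists_llm_true():
    # Runs only on hosts where the 'llm' runtime is actually installed;
    # elsewhere pytest reports it as skipped with the reason above.
    assert _llm_available is True
```

Because the marker is evaluated at collection time, the skip decision is made once per module rather than inside each test, and the reason string shows up verbatim in the pytest summary.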

Coverage Impact

Before: 4 failures (LLM not installed)
After: 0 failures, 4 correctly skipped

The tests still run and assert correctness when the runtime is present.

Trade-offs

  • Slightly more test code, but cleaner CI signal
  • Skip reasons are explicit and actionable

Reproducibility

# Run the fixed tests
.venv/bin/pytest tests/unit/test_runtime_factory.py -v

# Full unit suite
.venv/bin/pytest tests/unit/ -q --ignore=tests/unit/test_audit_report.py \
  --ignore=tests/unit/test_deps_update_command.py \
  --ignore=tests/unit/test_plugin_exporter_schema.py

Test Status

  • Before: 4 failures in test_runtime_factory.py
  • After: 0 failures, 4 skipped (when LLM not installed), 6 passing
  • Remaining 7 failures in test_policy_status.py are pre-existing (unrelated to this PR)

Generated by Daily Test Improver

To install this agentic workflow, run

gh aw add githubnext/agentics/workflows/daily-test-improver.md@b87234850bf9664d198f28a02df0f937d0447295

Four tests in test_runtime_factory.py assumed LLM is installed on the
host system. In CI (and most dev environments), only Copilot/Codex may
be present, causing spurious failures.

Fix: add module-level skip markers that probe runtime availability at
collection time. LLM-specific tests skip gracefully when 'llm' runtime
is absent; runtime-agnostic tests remain unconditional.

Also refactored test_get_available_runtimes_real_system to not assume
LLM is present, and split the LLM-specific assertion into a separate
skippable test.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@danielmeppiel added the automation (Deprecated: use type/automation. Kept for issue history; will be removed in milestone 0.10.0.) and testing (Deprecated: use area/testing. Kept for issue history; will be removed in milestone 0.10.0.) labels on May 5, 2026
