forked from ansible/awx
[pull] devel from ansible:devel #572
Open
pull wants to merge 1,931 commits into philipsd6:devel from ansible:devel
Conversation
* Initial requirement bump for Django CVE
* Run updater script
…5873)
* Dump running tasks when running out of capacity
* Use same logic for max_workers and capacity
* Address case where CPU capacity is the constraint
* Add a test for correspondence
* Fake redis to make tests work
* Updated pinned runner and receptorctl in controller.
* Bumped receptorctl to 1.5.4
#15965)
* fix: keep processing events, even if previous event data cannot be parsed
* change log level to warning
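The resilience pattern described above amounts to catching the parse failure per event instead of letting one bad payload abort the whole loop. A minimal sketch, assuming JSON payloads; `handle_event` is a hypothetical downstream handler, not AWX's actual callback code:

```python
import json
import logging

logger = logging.getLogger(__name__)


def process_events(raw_events, handle_event):
    """Keep consuming events even when a single payload is malformed."""
    for raw in raw_events:
        try:
            event = json.loads(raw)
        except ValueError:
            # One unparsable event should not stop event processing;
            # log at warning level (not error) and move on to the next one.
            logger.warning("Could not parse event data, skipping: %r", raw[:200])
            continue
        handle_event(event)  # hypothetical downstream handler
```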
* Django URL validators support IPv6; our custom URLField allow_plain_hostname feature was mangling the hostname before it was passed to the Django validators.
* This change skips the allow_plain_hostname transformations if the value looks like an IPv6 address.
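A rough sketch of such a "looks like IPv6" guard, for illustration only; the helper name and logic below are assumptions, not the actual URLField code:

```python
import ipaddress
from urllib.parse import urlsplit


def looks_like_ipv6(value):
    """Return True if the value's host part parses as an IPv6 address."""
    # urlsplit() strips the surrounding [] from bracketed IPv6 hosts;
    # fall back to the raw value when it is not a full URL.
    host = (urlsplit(value).hostname or value).strip("[]")
    try:
        return isinstance(ipaddress.ip_address(host), ipaddress.IPv6Address)
    except ValueError:
        return False


# When this returns True, the allow_plain_hostname rewriting would be skipped
# and the value handed to Django's validators untouched.
```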
Update code to pull subscriptions from console.redhat.com instead of subscription.rhsm.redhat.com. Uses a service account client ID and client secret instead of username/password, which is being deprecated in July 2025.
Additional changes:
- In the awx.awx.subscriptions module, use the new service account params rather than the old basic auth params
- Update the awx.awx.license module to use subscription_id instead of pool_id. This is due to using a different API, which identifies unique subscriptions by subscriptionID instead of pool ID.
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Chris Meyers <chris.meyers.fsu@gmail.com>
Co-authored-by: Peter Braun <pbraun@redhat.com>
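For context, exchanging a service account client ID/secret for an access token is a standard OAuth 2.0 client-credentials flow. A minimal sketch; the token URL shown is an assumption and is not taken from this change:

```python
import requests

# Assumed Red Hat SSO token endpoint for service accounts (not from the AWX change).
TOKEN_URL = "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token"


def get_service_account_token(client_id, client_secret):
    """Client-credentials grant: trade an ID/secret pair for a bearer token."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```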
* Demo of sorting hosts live test
* Sort both bulk updates and add batch size to facts bulk update to resolve deadlock issue
* Update tests to expect batch_size to agree with changes
* Add utility method to bulk update and sort hosts and apply it in the appropriate locations (see the sketch after this list)
  - Remove unused imports
  - Add utility method for sorting bulk updates
  - Remove try/except OperationalError for loop
  - Remove unused import of django.db.OperationalError
  - Remove batch size as it is now set to 100 on the bulk update utility method
  - Remove batch size here since it is specified in the sorted bulk update
  - Add transaction.atomic so the entire operation runs as a single transaction before committing to the db
  - Revert change to bulk_update as it's not needed here; just sort instead
  - Move the sorted bulk update utility method into db.py and rename it so it is not specific to Hosts
  - Revise to import bulk_update_sorted_by_id rather than passing it as an argument
  - Fix the way bulk_update_sorted_by_id is imported
  - Remove unneeded Host import and remove calls to bulk_update as args
  - Revise calls to bulk_update_sorted_by_id to include Host in the args
  - Remove raw_update_hosts method and replace with bulk_update_sorted_by_id in update_hosts
  - Remove update_hosts function and replace with bulk_update_sorted_by_id
  - Update live tests to use bulk_update_sorted_by_id
  - Fix the fields in bulk_update to agree with test
* Update functional tests to use bulk_update_sorted_by_id since update_hosts has been deleted
  - Replace update_hosts with bulk_update_sorted_by_id
  - Remove references to update_hosts
  - Update corresponding fact caching tests to use bulk_update_sorted_by_id
  - Remove import of the sorted bulk update helper where unused
  - Add code comment to live test to silence Sonarqube hotspot
* Add NOSONAR comment to silence the Sonarqube warning, since this is just a test and not actually a security issue
  - Get test_finish_job_fact_cache_with_existing_data passing
  - Get test_finish_job_fact_cache_clear passing
  - Remove reference to raw_update and replace with the new bulk update utility method
  - Add pytest.mark.django_db to appropriate tests
  - Correct which model is called in bulk_update_sorted_by_id
  - Remove now-unused Host import
  - Point bulk_update_sorted_by_id to where it is actually being used
  - Correct import of bulk_update_sorted_by_id
  - Revert changes in this file to avoid db calls issue
  - Remove @pytest.mark.django_db from unit tests
  - Remove commented-out host sorting suggested fix
  - Fix failing tests test_pre_post_run_hook_facts_deleted_sliced & test_pre_post_run_hook_facts
  - Remove atomic transaction line, add return, and add docstring
* Fix failing tests test_finish_job_fact_cache_clear & test_finish_job_fact_cache_with_existing_data
---------
Co-authored-by: Alan Rominger <arominge@redhat.com>
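The deadlock fix hinges on always updating rows in the same order. A sketch of what a `bulk_update_sorted_by_id`-style helper could look like; only the name, the primary-key ordering, and the batch size of 100 come from the commit message, the body is illustrative:

```python
def bulk_update_sorted_by_id(model_cls, objs, fields, batch_size=100):
    """Bulk-update objects in ascending primary-key order.

    Two concurrent bulk updates that touch overlapping rows in different
    orders can deadlock in PostgreSQL; a consistent pk ordering (plus a
    bounded batch size) makes the lock acquisition order deterministic.
    """
    objs = sorted(objs, key=lambda obj: obj.pk)
    model_cls.objects.bulk_update(objs, fields, batch_size=batch_size)
    return objs
```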
* Bump migration history again
* Fix weird random naming
Co-authored-by: Jiří Jeřábek (Jiri Jerabek) <Jerabekjirka@email.cz>
Co-authored-by: Alexander Saprykin <cutwatercore@gmail.com>
* similar to aws, allow the use of the standard vmware cred
* fix replace logic so that we don't overwrite and stay only at vmware when ec2 is selected
* add an env.json for functional testing
* Clear in-memory cache, suggested by bugbot
* Clear the cache even harder than we were before
* Syntax bugbot
* Fix ARM64 build failure by upgrading dev container Node.js to 18
  Node.js 16.13.1 fails to extract on ARM64 in Docker BuildKit's overlay filesystem during multi-arch builds. Upgrade to Node 18, which is already used by the UI builder stage and has proper ARM64 support.
* Fix collectstatic failure by setting AWX_MODE=default
  AWX_MODE=defaults is an intentionally "invalid" environment name that:
  1. Loads only defaults.py - the base settings file without any environment-specific overrides (development_defaults.py, production_defaults.py, etc.)
  2. Bypasses production checks - since "production" not in "defaults", it skips the assertion that requires /etc/tower/settings.py to exist
  3. Bypasses development mode - since is_development_mode would be false
  This is perfect for collectstatic during the container build because:
  - No database connection needed
  - No secret key needed (hence SKIP_SECRET_KEY_CHECK)
  - No PostgreSQL version check (hence SKIP_PG_VERSION_CHECK)
  - Just need minimal Django settings to collect static files
…aarch64 (#16225)
Use dnf module for Node.js 18 instead of the n version manager
The n version manager fails to extract Node.js archives due to very long file paths in include/node/openssl/archs/ directories when running in Docker BuildKit's overlay filesystem. This causes CI build failures with tar "Cannot open: Invalid argument" errors. Switch to installing Node.js 18 directly from CentOS Stream 9's module stream, which avoids the archive extraction issue entirely.
Switch to git-based installation of kubernetes python client from github.com/kubernetes-client/python at commit df31d90d6c910d6b5c883b98011c93421cac067d (release-34.0 branch). This also allows removing the urllib3<2.4.0 upper bound constraint that was previously required by kubernetes 34.1.0 from PyPI.
Introduces new Makefile targets to update and upgrade requirements files using pip-compile, both directly and via docker-runner. These additions streamline dependency management for development and CI workflows.
Refactored code to use Python's built-in datetime.timezone and zoneinfo instead of pytz for timezone handling. This modernizes the codebase and removes the dependency on pytz, aligning with current best practices for timezone-aware datetime objects.
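The shape of that migration, as a before/after sketch (the timezone name is illustrative):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# pytz style (removed):
#   import pytz
#   aware = pytz.timezone("America/New_York").localize(datetime(2025, 1, 1))

# Standard-library style: pass tzinfo directly, no localize() step needed.
aware = datetime(2025, 1, 1, tzinfo=ZoneInfo("America/New_York"))
now_utc = datetime.now(timezone.utc)
```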
* docs: update readthedocs.io URLs to docs.ansible.com equivalents
  🤖 Generated with Claude Code https://claude.ai/code
  Co-Authored-By: Claude <noreply@anthropic.com>
* Update Bullhorn newsletter link in communication docs
---------
Co-authored-by: Claude <noreply@anthropic.com>
…le organizations (#16170)
Fixed the module organizations description for the notification_templates_approvals option
Co-authored-by: Pascal Kontschan <pascal.kontschan.extern@atruvia.de>
Assisted-by: Claude
…16214)
* Slightly alter history to avoid having a Django 5 related migration
* Revert prior field states to be slightly more clear
- Move kubernetes from git-based install to PyPI (v35.0.0 now available)
- Remove urllib3 cap comment since kubernetes 35.0.0 no longer restricts it
- Update README.md upgrade blocker documentation
Remove transitive dependencies no longer needed by kubernetes 35.0.0
Removes google-auth and rsa, which were transitive dependencies of the older kubernetes client but are no longer required in v35.0.0. Adds cachetools as a direct dependency since it's used by awx/conf/settings.py for TTLCache (it was previously a transitive dep of google-auth).
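For reference, the cachetools primitive in question is a dict-like cache whose entries expire after a fixed time-to-live; the key names and sizes below are illustrative, not the values used in awx/conf/settings.py:

```python
from cachetools import TTLCache

# Dict-like cache: at most 2048 entries, each evicted ~60 seconds after insertion.
settings_cache = TTLCache(maxsize=2048, ttl=60)

settings_cache["SOME_SETTING"] = "value"
value = settings_cache.get("SOME_SETTING")  # returns None once the entry has expired
```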
)
* Enhance OpenAPI schema with AI descriptions and fix method names
  Add x-ai-description extensions to API endpoints for better AI agent comprehension. Fix view method names to ensure proper drf-spectacular schema generation.
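A hedged sketch of how a vendor extension such as x-ai-description can be attached to an endpoint with drf-spectacular; the view class and wording are invented for illustration, not copied from this change:

```python
from drf_spectacular.utils import extend_schema
from rest_framework.response import Response
from rest_framework.views import APIView


class JobTemplateLaunchView(APIView):  # hypothetical view
    @extend_schema(
        description="Launch a job from this job template.",
        extensions={"x-ai-description": "Starts a new job based on this template and returns its ID."},
    )
    def post(self, request, *args, **kwargs):
        return Response({"job": 1})
```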
* Collect operator logs on timeout
* Set timeout back to prod value
* Add e to the bash with timeout block
* WIP First pass
* started removing feature flags and adjusting logic
* Add decorator
* moved to dispatcher decorator
* updated as many as I could find
* Keep callback receiver working
* remove any code that is not used by the callback receiver
* add back auto_max_workers
* added back get_auto_max_workers into common utils
* Remove control and hazmat (squash this not done)
* moved status out and deleted control as no longer needed
* removed unused imports
* adjusted test import to pull correct method
* fixed imports and addressed clusternode heartbeat test
* Update function comments
* Add back hazmat for config and remove baseworker
* added back hazmat per @AlanCoding feedback around config
* removed baseworker completely and refactored it into the callback worker
* Fix dispatcher run call and remove dispatch setting
* remove dispatcher mock publish setting
* Adjust heartbeat arg and more formatting
* fixed the call to cluster_node_heartbeat missing binder
* Fix attribute error in server logs
Use the ansible-community@redhat.com alias as the contact email address.
* Added link and ref to openAPI spec for community
* Update docs/docsite/rst/contributor/openapi_link.rst
  Co-authored-by: Don Naro <dnaro@redhat.com>
* add sphinxcontrib-redoc to requirements
* sphinxcontrib.redoc configuration (see the conf.py sketch below)
* create openapi directory and files
* update download script for both schema files
* suppress warning for redoc
* update labels
* fix extra closing parenthesis
* update schema url
* exclude doc config and download script
  The Sphinx configuration (conf.py) and schema download script (download-json.py) are not application logic and are used only for building documentation. Coverage requirements for these files are overkill.
* exclude only the sphinx config file
---------
Co-authored-by: Don Naro <dnaro@redhat.com>
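A minimal conf.py fragment showing the sphinxcontrib-redoc wiring the list above refers to; the page path and spec filename are placeholders, not the repository's actual values:

```python
# Sphinx conf.py (fragment)
extensions = [
    # ... existing Sphinx extensions ...
    "sphinxcontrib.redoc",
]

redoc = [
    {
        "name": "AWX REST API",
        "page": "rest_api/openapi",       # output page (placeholder)
        "spec": "_static/openapi.json",   # schema fetched by the download script (placeholder)
        "embed": True,                    # embed the spec so the page works offline
    }
]
```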
…eaper updates (#16243)
* Additional dispatcher removal simplifications and waiting reaper updates
* Fix double call and logging message
* Implement bugbot comment, should reap running on lost instances
* Add test case for new pending behavior
This setting is set in defaults.py, but currently not being used. More technically, project_update.yml is not passing this value to the insights.py action plugin. Therefore, we can safely remove references to it. insights.py already has a default oidc endpoint defined for authentication.
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
* Fix broken cancel logic with dispatcherd
  Update tests for UnifiedJob
  Update test assertion
* Further simplify cancel path
…face to dispatcherd lib (#16206)
* Add dispatcherctl command
* Add tests for dispatcherctl command
* Exit early if sqlite3
* Switch to dispatcherd mgmt cmd
* Move unwanted command options to run_dispatcher
* Add test for new stuff
* Update the SOS report status command
* make docs always reference new command
* Consistently error if given config file
Labels: ⤵️ pull, community, component:api, component:awx_collection, component:cli, component:docs, component:ui, dependencies
See Commits and Changes for more details.
Created by pull[bot]
Can you help keep this open source service alive? 💖 Please sponsor : )