diff --git a/.gitignore b/.gitignore index 273b8432f..ca491cb9f 100644 --- a/.gitignore +++ b/.gitignore @@ -35,3 +35,4 @@ doc/_build/ fence_xvm.key vm-host-table tools/kcli/etc/kcli.cfg +*.swp diff --git a/blueprints/break-out-overcloud-playbooks.rst b/blueprints/break-out-overcloud-playbooks.rst new file mode 100644 index 000000000..2201c4664 --- /dev/null +++ b/blueprints/break-out-overcloud-playbooks.rst @@ -0,0 +1,88 @@ +.. + This work is licensed under a Creative Commons Attribution 3.0 Unported + License. + + http://creativecommons.org/licenses/by/3.0/legalcode + +.. + This template should be in ReSTructured text. The filename in the git + repository should match the launchpad URL, for example a URL of + https://bugzilla.redhat.com/show_bug.cgi?id= should be named + .rst . Please do not delete any of the sections in this + template. If you have nothing to say for a whole section, just write: None + For help with syntax, see http://sphinx-doc.org/rest.html + To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html + +============================================================ +Break out overcloud playbooks to match tripleo documentation +============================================================ + +Reorganize the overcloud deployment playbooks so that their structure matches the upstream +tripleo documentation, making the playbooks easier to follow, test, and extend. + +Problem description +=================== + +Originally spec'd out as a requirement for the dell/dci integration in December 2015. +Search Google docs for dci_dell_khaleesi integration. + +Integration: +------------ +How can third parties inject a custom workflow? At the moment, third parties to CI +are not able to inject requirements into CI without making a change directly to the code path. + +Testing: +-------- +Any third-party changes are difficult to integrate and test. The complete matrix of +gates must be executed for any change. + +Time to results: +---------------- +Breaking a deployment into only two parts, undercloud and overcloud, is not sufficient +when users want to deploy a cloud by hand. If there is an issue, one must start over from +the beginning.
+ +Proposed change +=============== + +The change will break out the overcloud playbooks to match the sections described in [1]. +A user can follow the code in the playbooks and match it directly to the documentation. + +[1] http://docs.openstack.org/developer/tripleo-docs/ + +Alternatives +------------ + +None proposed. + +Implementation +============== + +Assignee(s) +----------- +- wes hayutin +- harry rybacki + + +Milestones +---------- + +Target Milestone for completion: + + - create directory structure that matches the tripleo documentation + - move the content of the playbooks into the new playbooks + - test virt and baremetal deployments + - test puddle and poodle jobs + +Work Items +---------- + + - create directory structure that matches the tripleo documentation + - move the content of the playbooks into the new playbooks + - test virt and baremetal deployments + - test puddle and poodle jobs + + +Dependencies +============ + +None diff --git a/blueprints/depends-on_rpm_build.rst b/blueprints/depends-on_rpm_build.rst new file mode 100644 index 000000000..458b8cd0c --- /dev/null +++ b/blueprints/depends-on_rpm_build.rst @@ -0,0 +1,129 @@ +.. + This work is licensed under a Creative Commons Attribution 3.0 Unported + License. + + http://creativecommons.org/licenses/by/3.0/legalcode + + +=========================== +rpm build for depends-on +=========================== + +Depends-on functionality is broken at the moment. + + +Problem description +=================== + +If a comment on a patch has a depends-on: [:codeng] + +CI is supposed to fetch those patches and add them to the current run. For example, if a +tripleoclient change needs a review from tht, it is expected that we gate tripleoclient with a +tht patched with that review. + +But right now that functionality is not working. Since we are no longer building from repos under +a git clone in {{ base_dir }}, the way depends-on currently works does not apply.
What we need +is for it to use the patch-rpm and build-package playbooks to create the packages, so we can +upload them to the test run. To make things more complicated, there are two kinds of depends-on: +filepath-related changes, where a patch depends on another patch from khaleesi or +khaleesi-settings, and rpm-related changes, where a patch depends on a change to another rpm. + + +Proposed change +=============== + + +The idea is to split the depends-on playbook into two playbooks: + +depends-on-repo +--------------- + +This playbook will update the current HEAD of the repos under the base_dir. + + +depends-on-rpm +-------------- + +This will generate a small extra ksgen_settings.yml, probably called extra_settings_{{num}}.yml, +that is passed together with ksgen_settings.yml. + +The extra_settings_.yml would contain just the needed change to the ksgen_settings and would +look something like: + + +.. code-block:: yaml + + gating_repo: openstack-tripleo-heat-templates + patch: + dist_git: + branch: + 7-director: rhos-7.0-pmgr-rhel-7 + 8-director: rhos-8.0-director-rhel-7 + name: openstack-tripleo-heat-templates + url: 'http://pkgs.devel.redhat.com/cgit/rpms/openstack-tripleo-heat-templates' + gerrit: + branch: rhos-7.0-patches # the filled-in branch from the dependent review + name: gerrit-openstack-tripleo-heat-templates + refspec: refs/changes/41/65241/9 # the filled-in refspec from the dependent review + url: 'https://code.engineering.redhat.com/gerrit/openstack-tripleo-heat-templates' + upstream: + name: upstream-openstack-tripleo-heat-templates + url: https://git.openstack.org/openstack/tripleo-heat-templates + + +So a job would look like this: + + +.. code-block:: bash + + # fetch dependent gating changes for khaleesi and khaleesi-settings + if [ "$GERRIT_CHANGE_COMMIT_MESSAGE" ]; then + ansible-playbook -i local_hosts -vv playbooks/depends-on-repo.yml + fi + + # generate config + ksgen --config-dir settings generate \ + + ... 
+ --extra-vars @../khaleesi-settings/settings/product/rhos/private_settings/redhat_internal.yml \ + ksgen_settings.yml + + # fetch dependent gating changes for related rpms + if [ "$GERRIT_CHANGE_COMMIT_MESSAGE" ]; then + ansible-playbook -i local_hosts -vv playbooks/depends-on-rpm.yml + fi + + for extra_settings in extra_settings_*.yml; do + if [ -e "$extra_settings" ] ; then + ansible-playbook -vv --extra-vars @ksgen_settings.yml --extra-vars @$extra_settings -i local_hosts playbooks/build_gate_rpm.yml; + fi; + done + # now the built rpms are in the base_dir/generated_rpms/*.rpm + + ... continue with the deployment ... + + +The second ``--extra-vars`` will overwrite the common parameters of the ksgen_settings, allowing us +to build multiple packages. The downside is that it will only work for the packages that we know +how to build rpms for. + + +Implementation +============== + +Assignee(s) +----------- + +Primary assignee: + + apetrich@redhat.com + + diff --git a/blueprints/ospd8-undercloud-deploys-ospd7-overcloud.rst b/blueprints/ospd8-undercloud-deploys-ospd7-overcloud.rst new file mode 100644 index 000000000..fc297e643 --- /dev/null +++ b/blueprints/ospd8-undercloud-deploys-ospd7-overcloud.rst @@ -0,0 +1,61 @@ +.. + This work is licensed under a Creative Commons Attribution 3.0 Unported + License. + + http://creativecommons.org/licenses/by/3.0/legalcode + +===================================================== +Deploy an ospd-7 overcloud using an ospd-8 undercloud +===================================================== + +We have some requirements from PM to deploy an ospd-7 overcloud using an +ospd-8 undercloud. PM would like this in CI's status jobs.
+ +Problem description +=================== + +Consult PM + +Proposed change +=============== + +- Deploy the undercloud +- Remove the tripleo-heat-templates for ospd-8 +- Install the tripleo-heat-templates for ospd-7 +- Rerun ksgen for ospd-8 +- Deploy + +Alternatives +------------ + +None + +Implementation +============== + +Assignee(s) +----------- +whayutin@redhat.com + +Milestones +---------- + +- Deploy the undercloud +- Remove the tripleo-heat-templates for ospd-8 +- Install the tripleo-heat-templates for ospd-7 +- Deploy + +Work Items +---------- + +- test deployment in a dev environment +- build POC job +- build new jjb builder, template +- test POC job +- test with baremetal +- push to production + +Dependencies +============ + +- The playbooks must be able to be called independently diff --git a/blueprints/templates/template.rst b/blueprints/templates/template.rst new file mode 100644 index 000000000..5c2320bca --- /dev/null +++ b/blueprints/templates/template.rst @@ -0,0 +1,93 @@ +.. + This work is licensed under a Creative Commons Attribution 3.0 Unported + License. + + http://creativecommons.org/licenses/by/3.0/legalcode + +.. + This template should be in ReSTructured text. The filename in the git + repository should match the launchpad URL, for example a URL of + https://bugzilla.redhat.com/show_bug.cgi?id= should be named + .rst . Please do not delete any of the sections in this + template. If you have nothing to say for a whole section, just write: None + For help with syntax, see http://sphinx-doc.org/rest.html + To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html + +=========================== +The title of your blueprint +=========================== + +Introduction paragraph -- why are we doing anything? + +Problem description +=================== + +A detailed description of the problem. + +Proposed change +=============== + +Here is where you cover the change you propose to make in detail.
How do you +propose to solve this problem? + +If this is one part of a larger effort, make it clear where this piece ends. In +other words, what's the scope of this effort? + +Include where in the Khaleesi tree hierarchy this will reside. + +Alternatives +------------ + +This is an optional section; where it does apply, we'd just like a demonstration +that some thought has been put into why the proposed approach is the best one. + +Implementation +============== + +Assignee(s) +----------- + +Who is leading the writing of the code? Or is this a blueprint where you're +throwing it out there to see who picks it up? + +If more than one person is working on the implementation, please designate the +primary author and contact. + +Primary assignee: + + TBD: + +You can optionally list additional ids if they intend on doing +substantial implementation work on this blueprint. + +Milestones +---------- + +Target Milestone for completion: + + TBD: As Khaleesi has no current 'release cycle' it's hard to project timelines and + allocate resources accordingly. This is something we should discuss. + +Work Items +---------- + +Work items or tasks -- break the feature up into the things that need to be +done to implement it. Those parts might end up being done by different people, +but we're mostly trying to understand the timeline for implementation. + +- : + +- : + + ... + +- : + +Dependencies +============ + +- Include specific references to specs and/or blueprints in Khaleesi, or in other + projects, that this one either depends on or is related to. + +- Does this feature require any new library dependencies or code otherwise not + included in OpenStack? Or does it depend on a specific version of a library? diff --git a/blueprints/tls_support_for_tht.rst b/blueprints/tls_support_for_tht.rst new file mode 100644 index 000000000..fc8fcdc82 --- /dev/null +++ b/blueprints/tls_support_for_tht.rst @@ -0,0 +1,93 @@ +..
+ This work is licensed under a Creative Commons Attribution 3.0 Unported + License. + + http://creativecommons.org/licenses/by/3.0/legalcode + +========================================== +Add tls support for Tripleo Heat Templates +========================================== + +In order for us to have an SSL installation of OpenStack, we need some configuration template files in place. + +I'm proposing a python library to extract the templates from tht and export the filled-in files that we can use to create the overcloud. + +Problem description +=================== + +The problem is twofold and arises because the specific templates are versioned in tht; storing them in khaleesi +(keeping two versions of the same file in different projects) just creates technical debt. + + +The second part of the problem is that the templates themselves are not good for sed-ing in the parameters +that we need, because we have to add the certificate and key PEMs into a YAML file. + +This is a snippet of the template:: + + parameter_defaults: + SSLKey: | + The contents of the private key go here + ... 
+ +We need it filled like this:: + + parameter_defaults: + SSLKey: | + -----BEGIN PRIVATE KEY----- + MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC3TWYoCCDKdgFA + 0q4/OcbB3N15h8lWtM3BI2KzqOMa4sUhVcPvT5Sp7c0hNPu7yDwfRVBFCr+eyEIC + o+1pSffmnp8Gzo4VeDWFcRjFymWXTw/fi8j1lj8jMZGH6lkqpgLDd1koUIxXpOKm + HCkz2rdEdYpjjmGkNCUnAe3xlwolJAFXwg1GyPXPxAJ/r6Ylp+/9COxR7OCpbPGr + lrN2q1rvntI4SrTfo+lX3lIlqvKbCnko7AECpGocC6+uV3au1T3OBwFAPh0u8OA3 + nJHKYkb8PAPpNRiUp1KX4o/9Y+zdm8uN/AIxcm7JtzFf8jIISjYMD4i0E6JrjYzw + nVAQVmbNAgMBAAECggEBAJ8sWP9+P2tQmbn+uU0yEMSb1L8KCO6ARwPmhHlauQvJ + zEEsRt7zDjeZxr2FUuw37u2AtTmfIdLyN1AvpaP+lYTwTUwN5hgCsQdVtJtdLGb+ + QtxueG26sM0Q6D1MZW3BhzjR1NxLRfN9vUtdvPHIhcivASN+qo96sKB07nkSHb8t + UVLVJORYqlwIHZP5q+U4QFHftCwpE6WvrR4CuSc0PmnBualb9I0BVeAteVecifQG + CTooUPFfF9enYuQSjnGpuaVunNxpJB69TR3YHP5N8GUXXqIXmEtuuoSpLXKCP2Xc + 7lg+uH4+VpKT2tlZOmhJU1Kk6OKoLLvDMG0Zf0eAaoECgYEA7DuHs/to81sm8fps + EHcDCl9YNzGuDbGQ2bi73ff7QWm9FyULsm1Vp1Eu3uvsQa3RRaC5oPrtRU58OcoD + n/4MEUKTDxjGUfw1NOfYPHeCSQHqM82L6AwwOMRH4qRV33hR16NDy8vhm+YseRNK + AQcMFflp/tBO8aUc6P2Ui69UftECgYEAxqQFfFRo4ItBiMT/7i2oiJ3u3YDq/C2w + l1DX+g7BD2SKHWmtxJJ4IiczxCL73tDrhLwp3kpqnyd5+6gdhgHtC/sJjx+RoThl + A5To5Fd+vCvtoJlexsXPEgnIkZFNqwstRUAaI0iTIdt/Nzymzpn/iYp0Qc0p9/vo + Unlq791C/z0CgYAnSxueZ2IkoHPQ6huRfYpG7mcI/z15T6DNZjnxiO8FCWaHdAUH + D8KgixNlxw5MOnJFx5841KQk1BI7tot10FcHg/BcIX3TY0UiYLIKFMLaC/R922G7 + HlPjDVr7quQRwLy0RpbfTjFfsiCRnxC/LQHooczsso9/CDzP0GYl+ervEQKBgATe + JBxF3UQTZYm6eiMWD1k5tY7MB/YiEH/ExWYlUmnUJuZNnqqAhF0h5MzbppxxNjRM + gCIoZLB9wSl/lymfhnWSs0tElMcEoMUTsxlVY4+s6+fRmlb4pfhlMPsQOn0Eixl1 + Vq6iqqhbvqRV4iiR8YcnU24BXxPqomjS/OHf5DJpAoGBAIIzvr9nR2W7ci3m55l0 + 5/EjeChqpVUBKiy6PGWWqj6kfeGlKnDbCJL3DBu5agc46WlJG143I3SvbgtVBwhy + MJVRj77Zqk7BnOjAczTTxP2N/Ga7ZsWzJj8AlKpxBUEB6chdj2BLL3y+/JcuOEjg + 8LMslpo4Fx5NBmNcdvie06tf + -----END PRIVATE KEY----- + ... 
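A minimal sketch of how such YAML-aware filling could work is shown below. This is illustrative only: it assumes PyYAML is available, the function name is hypothetical, and only the ``SSLKey``/``SSLCertificate`` parameters from the snippets above are handled; the blueprint's actual implementation is the review linked under Implementation.

```python
# Illustrative sketch only: parse a tht-style environment template as YAML
# and inject the PEM contents, instead of sed-ing text into a block scalar.
# Assumes PyYAML; fill_ssl_parameters is a hypothetical name.
import yaml

def fill_ssl_parameters(template_text, key_pem, cert_pem):
    """Load the template as a YAML structure and fill in the SSL parameters."""
    env = yaml.safe_load(template_text)
    params = env.setdefault('parameter_defaults', {})
    # Multi-line PEMs are plain Python strings here; the dumper decides how
    # to serialize them, so no manual indentation handling is needed.
    params['SSLKey'] = key_pem
    params['SSLCertificate'] = cert_pem
    return yaml.safe_dump(env, default_flow_style=False)

template = """
parameter_defaults:
  SSLKey: |
    The contents of the private key go here
"""

filled = fill_ssl_parameters(
    template,
    key_pem='-----BEGIN PRIVATE KEY-----\ndummy\n-----END PRIVATE KEY-----\n',
    cert_pem='-----BEGIN CERTIFICATE-----\ndummy\n-----END CERTIFICATE-----\n')
print(filled)
```

The output can then be written to a file and passed to the deploy command with ``-e``.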
+ +Proposed change +=============== + +A simple python library can get the templates from the tht installation, read them +as a YAML structure, make the required changes, and output the fully templated files, which we +can then pass with the -e parameter to openstack overcloud deploy. + + +Alternatives +------------ + +The alternative is storing those templates in khaleesi, but I feel that just brings technical debt. + + +Implementation +============== + +I have an implementation here: https://review.gerrithub.io/#/c/259773/ + + +Assignee(s) +----------- + +Primary assignee: + +Adriano Petrich + + diff --git a/blueprints/tripleo-quickstart.rst b/blueprints/tripleo-quickstart.rst new file mode 100644 index 000000000..bb7e619a4 --- /dev/null +++ b/blueprints/tripleo-quickstart.rst @@ -0,0 +1,115 @@ +.. + This work is licensed under a Creative Commons Attribution 3.0 Unported + License. + + http://creativecommons.org/licenses/by/3.0/legalcode + + +================================================== +Replace instack-virt-setup with tripleo quickstart +================================================== + +instack-virt-setup is the official way to set up a POC virt environment for tripleo [1] + +A replacement for instack-virt-setup has been adopted by the rdo community [2][3] + + +[1] http://tripleo.org/environments/environments.html#virtual-environment +[2] https://www.rdoproject.org/rdo-manager/ +[3] https://github.com/redhat-openstack/tripleo-quickstart + +Problem description +=================== + +instack-virt-setup itself is not tested in tripleo, nor is it supported downstream. +It's not an idempotent setup of the tripleo environment and it's also not very configurable. + + +Proposed change +=============== + +- Add support for executing the tripleo quickstart to set up the undercloud and overcloud +  nodes in virtual environments and then hand off to khaleesi for the overcloud deployment. + +- Update tripleo quickstart to work with the downstream ospd content.
+ +- Once completed, this work will bring the downstream virtual deployments in line with the +  accepted upstream virtual deployment. + +- For puddles, the goal is to have an undercloud appliance that is simply imported and started. +  The appliance will be built with the quickstart playbooks. + +- In the tripleo, rdo, or poodle workflow, if patches or updates need to be applied to the +  undercloud appliance, the quickstart is already built to handle updates. + +- Provide a community standard for building the undercloud when needed. It will be much easier +  to push this standard if the code is single purpose and not commingled with khaleesi. + +- Other tools, whether they are ansible, python, or shell based, can all interface with khaleesi +  via the hosts and ssh config file. A well-defined interface into khaleesi may prove more +  valuable than trying to include *everything* in khaleesi itself. + +Alternatives +------------ + +Create other tools and workflows that call libvirtd to stand up and provision virt environments +for rdo-manager/ospd: + +- libvirt implementations, e.g. https://review.gerrithub.io/#/c/259615/ + +- no-op or manual + +Implementation +============== + +Assignee(s) +----------- + +myoung@redhat.com +sshnaidm@redhat.com +whayutin@redhat.com +trown@redhat.com + + +Primary assignee: + + RDO: sshnaidm@redhat.com + OSPD: myoung@redhat.com + +Milestones +---------- + +Target Milestone for completion: + + M1. Proof of Concept - create beta code and jobs to test tripleo quickstart + M2. Proof of Concept - create a branch of tripleo quickstart for downstream ospd use + M3. Design for Production - create a design for upstream/downstream quickstart + M4. Implementation + M5. Production deployment + + +Work Items +---------- + +Work items or tasks -- break the feature up into the things that need to be +done to implement it. Those parts might end up being done by different people, +but we're mostly trying to understand the time line for implementation.
+ + - POC rdo-manager job that executes the khaleesi provisioner, tripleo quickstart to set up the + undercloud, and hands off to khaleesi for the overcloud deployment, test and log collection. + - JJB created for rdo-manager jobs in ci.centos for the above workflow + - An ospd-7 undercloud qcow is created + - The tripleo quickstart is branched for ospd and updated to use the downstream yum repos and + adjustments are made for ospd-7 and ospd-8 + - A POC ospd job that executes the khaleesi provisioner, tripleo quickstart (ospd) to set up the + undercloud, and hands off to khaleesi for the overcloud deployment. + - A design is created for tripleo quickstart to elegantly and efficiently handle the subtle differences + between setting up rdo-manager and ospd director for all the supported versions. + - The design from M3 is implemented + - tripleo quickstart is formally supported in CI + + +Dependencies +============ + + diff --git a/doc/best_practices.rst b/doc/best_practices.rst new file mode 100644 index 000000000..e1e2b99ae --- /dev/null +++ b/doc/best_practices.rst @@ -0,0 +1,321 @@ +Khaleesi Best Practices Guide +============================= + +The purpose of this guide is to lay out the coding standards and best practices to be applied when +working with Khaleesi. These best practices are specific to Khaleesi but should be in line with +general `Ansible guidelines `_. + +Each section includes: + * A 'Rule' which states the best practice to apply + * Explanations and notable exceptions + * Examples of code applying the rule and, if applicable, examples of where the exceptions would hold + +General Best Practices +---------------------- + +**Rule: Whitespace and indentation** - Use 4 spaces. +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Ensure that you use 4 spaces, not tabs, to separate each level of indentation.
+ +Examples:: + + # BEST_PRACTICES_APPLIED + - name: set plan values for plan based ceph deployments + shell: > + source {{ instack_user_home }}/stackrc; + openstack management plan set {{ overcloud_uuid }} + -P Controller-1::CinderEnableIscsiBackend=false; + when: installer.deploy.type == 'plan' + + +**Rule: Parameter Format** - Use the YAML dictionary format when 3 or more parameters are being passed. +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +When several parameters are being passed in a module, it is hard to see exactly what value each +parameter is getting. It is preferable to use the Ansible YAML syntax to pass in parameters so +that it is clear what values are being passed for each parameter. + +Examples:: + + # Step with all arguments passed in one line + - name: create .ssh dir + file: path=/home/{{ provisioner.remote_user }}/.ssh mode=0700 owner=stack group=stack state=directory + + # BEST_PRACTICE_APPLIED + - name: create .ssh dir + file: + path: /home/{{ provisioner.remote_user }}/.ssh + mode: 0700 + owner: stack + group: stack + state: directory + + +**Rule: Line Length** - Keep text under 100 characters per line. +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +For ease of readability, keep text to a uniform length of 100 characters or less. Some modules +are known to have issues with multi-line formatting and should be commented on if it is an issue +within your change. 
+ +Examples:: + + # BEST_PRACTICE_APPLIED + - name: set plan values for plan based ceph deployments + shell: > + source {{ instack_user_home }}/stackrc; + source {{ instack_user_home }}/deploy-nodesrc; + openstack management plan set {{ overcloud_uuid }} + -P Controller-1::CinderEnableIscsiBackend=false + -P Controller-1::CinderEnableRbdBackend=true + -P Controller-1::GlanceBackend=rbd + -P Compute-1::NovaEnableRbdBackend=true; + when: installer.deploy.type == 'plan' + + # EXCEPTION: - When a module breaks from multi-line use, add a comment to indicate it + # The long line in this task fails when broken down + - name: copy over common environment file (virt) + local_action: > + shell pushd {{ base_dir }}/khaleesi; rsync --delay-updates -F --compress --archive --rsh \ + "ssh -F ssh.config.ansible -S none -o StrictHostKeyChecking=no" \ + {{base_dir}}/khaleesi-settings/hardware_environments/common/plan-parameter-neutron-bridge.yaml undercloud:{{ instack_user_home }}/plan-parameter-neutron-bridge.yaml + + +**Rule: Using Quotes** - Use single quotes. +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Use single quotes throughout playbooks except when double quotes are required +for ``shell`` commands or enclosing ``when`` statements. 
+ +Examples:: + + # BEST_PRACTICE_APPLIED + - name: get floating ip address + register: floating_ip_result + shell: > + source {{ instack_user_home }}/overcloudrc; + neutron floatingip-show '{{ floating_ip.stdout }}' | grep 'ip_address' | sed -e 's/|//g'; + + # EXCEPTION - shell command uses both single and double quotes + - name: copy instackenv.json to root dir + shell: > + 'ssh -t -o "StrictHostKeyChecking=no" {{ provisioner.host_cloud_user }}@{{ floating_ip.stdout }} \ + "sudo cp /home/{{ provisioner.host_cloud_user }}/instackenv.json /root/instackenv.json"' + when: provisioner.host_cloud_user != 'root' + + # EXCEPTION - enclosing a ``when`` statement + - name: copy instackenv.json to root dir + shell: > + 'ssh -t -o "StrictHostKeyChecking=no" {{ provisioner.host_cloud_user }}@{{ floating_ip.stdout }} \ + "sudo cp /home/{{ provisioner.host_cloud_user }}/instackenv.json /root/instackenv.json"' + when: "provisioner.host_cloud_user != {{ user }}" + + +**Rule: Order of Arguments** - Keep argument order consistent within a playbook. +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The order of arguments is:: + + tasks: + - name: + hosts: + sudo: + module: + register: + retries: + delay: + until: + ignore_errors: + with_items: + when: + +.. Warning:: While ``name`` is not required, it is an Ansible best practice, and a Khaleesi best + practice, to `name all tasks `_. 
+ +Examples:: + + # BEST_PRACTICE_APPLIED - polling + - name: poll for heat stack-list to go to COMPLETE + shell: > + source {{ instack_user_home }}/stackrc; + heat stack-list; + register: heat_stack_list_result + retries: 10 + delay: 180 + until: heat_stack_list_result.stdout.find("COMPLETE") != -1 + when: node_to_scale is defined + + # BEST_PRACTICE_APPLIED - looping through items + - name: remove any yum repos not owned by rpm + sudo: yes + shell: rm -Rf /etc/yum.repos.d/{{ item }} + ignore_errors: true + with_items: + - beaker-* + + +**Rule: Adding Workarounds** - Create bug reports and flags for all workarounds. +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +More detailed information and examples on working with workarounds in Khaleesi can be found +in the documentation on `Handling Workarounds `_. + + +**Rule: Ansible Modules** - Use Ansible modules over ``shell`` where available. +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The generic ``shell`` module should be used only when there is not a suitable Ansible module +available to do the required steps. Use the ``command`` module when a step requires a single +bash command. 
+ +Examples:: + + # BEST_PRACTICE_APPLIED - using Ansible 'git' module rather than 'shell: git clone' + - name: clone openstack-virtual-baremetal repo + git: + repo=https://github.com/cybertron/openstack-virtual-baremetal/ + dest={{instack_user_home}}/openstack-virtual-baremetal + + # BEST_PRACTICE_APPLIED - using Openstack modules that have checks for redundancy or + # existing elements + - name: setup neutron network for floating ips + register: public_network_uuid_result + quantum_network: + state: present + auth_url: '{{ get_auth_url_result.stdout }}' + login_username: admin + login_password: '{{ get_admin_password_result.stdout }}' + login_tenant_name: admin + name: '{{ installer.network.name }}' + provider_network_type: '{{ hw_env.network_type }}' + provider_physical_network: '{{ hw_env.physical_network }}' + provider_segmentation_id: '{{ hw_env.ExternalNetworkVlanID }}' + router_external: yes + shared: no + + # EXCEPTION - using shell as there are no Ansible modules yet for updating neutron quotas + - name: set neutron subnet quota to unlimited + ignore_errors: true + shell: > + source {{ instack_user_home }}/overcloudrc; + neutron quota-update --subnet -1; + neutron quota-update --network -1; + + +**Rule: Scripts** - Use scripts rather than shell for lengthy or complex bash operations. +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Scripts can hide output details, and debugging them requires the user to look in multiple +directories for the code involved. Consider using scripts over ``shell`` if the step in Ansible +requires multiple lines (more than ten), involves complex logic, or is called more than once.
+ +Examples:: + + # BEST_PRACTICE_APPLIED - calling Beaker checkout script, + # keeps the complexity of Beaker provisioning in a standalone script + - name: provision beaker machine with kerberos auth + register: beaker_job_status + shell: > + chdir={{base_dir}}/khaleesi-settings + {{base_dir}}/khaleesi-settings/beakerCheckOut.sh + --arch={{ provisioner.beaker_arch }} + --family={{ provisioner.beaker_family }} + --distro={{ provisioner.beaker_distro }} + --variant={{ provisioner.beaker_variant }} + --hostrequire=hostlabcontroller={{ provisioner.host_lab_controller }} + --task=/CoreOS/rhsm/Install/automatjon-keys + --keyvalue=HVM=1 + --ks_meta=ksdevice=link + --whiteboard={{ provisioner.whiteboard_message }} + --job-group={{ provisioner.beaker_group }} + --machine={{ lookup('env', 'BEAKER_MACHINE') }} + --timeout=720; + async: 7200 + poll: 180 + when: provisioner.beaker_password is not defined + + +**Rule - Roles** - Use roles for generic tasks which are applied across installers, provisioners, or testers. +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Roles should be used to avoid code duplication. When using roles, take care to use debug steps and +print appropriate code output to allow users to trace the source of errors, since the exact steps +are not visible directly in the playbook run. Please review `Ansible's official best practices `_ +documentation for more information regarding role structure. + +Examples:: + + # BEST_PRACTICE_APPLIED - validate role that can be used with multiple installers + https://github.com/redhat-openstack/khaleesi/tree/master/roles/validate_openstack + + + +RDO-Manager Specific Best Practices +----------------------------------- + +The following rules apply to RDO-Manager specific playbooks and roles. + + +**Rule: Step Placement** - Place a step under the playbook directory named for where it will be executed.
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The RDO-Manager related playbooks have the following directory structure:: + + |-- installer + |-- rdo-manager + |-- overcloud + |-- undercloud + |-- post-deploy + |-- rdo-manager + + +These guidelines are used when deciding where to place new steps: + + * ``undercloud`` - any step that can be executed without the overcloud + * ``overcloud`` - any step that is used to deploy the overcloud + * ``post-deploy`` - always a standalone playbook - steps executed once the overcloud is deployed + + +**Rule: Idempotency** - Any step executed post setup should be idempotent. +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +RDO-Manager has some setup steps that cannot be run multiple times without cleaning up the +environment. Any step added after setup should be able to rerun without causing damage. +*Defensive programming* conditions that check for existence or availability, etc., and modify +when or how a step is run can be added to ensure playbooks remain idempotent.
+ +Examples:: + + # BEST_PRACTICE_APPLIED - using Ansible modules that check for existing elements + - name: create provisioning network + register: provision_network_uuid_result + quantum_network: + state: present + auth_url: "{{ get_auth_url_result.stdout }}" + login_username: admin + login_password: "{{ get_admin_password_result.stdout }}" + login_tenant_name: admin + name: "{{ tmp.node_prefix }}provision" + + # BEST_PRACTICE_APPLIED - defensive programming, + # ignoring errors from creating a flavor that already exists + - name: create baremetal flavor + shell: > + source {{ instack_user_home }}/overcloudrc; + nova flavor-create baremetal auto 6144 50 2; + ignore_errors: true + + +Applying these Best Practices and Guidelines +-------------------------------------------- + +Before submitting a review for Khaleesi, please review your changes to ensure they follow +the best practices outlined above. + + +Contributing to this Guide +-------------------------- +Additional best practices and suggestions for improvements to the coding standards are welcome. +To contribute to this guide, please review `contribution documentation `_ +and submit a review to `GerritHub `_. diff --git a/doc/cookbook.rst b/doc/cookbook.rst index 992450500..c6e5c89a2 100644 --- a/doc/cookbook.rst +++ b/doc/cookbook.rst @@ -54,7 +54,7 @@ or on Fedora 22:: sudo dnf install -y python-virtualenv gcc -Create the virtual envionment, install ansible, and ksgen util:: +Create the virtual environment, install ansible, and ksgen util:: virtualenv venv source venv/bin/activate @@ -143,3 +143,130 @@ Cleanup After you finished your work, you can simply remove the created instances by:: ansible-playbook -vv --extra-vars @ksgen_settings.yml -i hosts playbooks/cleanup.yml + + +Building rpms +------------- +You can use khaleesi to build rpms for you. + +If you want to manually test an rpm with a patch from gerrit, you can use the khaleesi infrastructure to do that.
+
+Setup Configuration:
+````````````````````
+What you will need:
+
+Ansible 1.9 installed; I would recommend installing it in a virtualenv::
+
+    virtualenv foobar
+    source foobar/bin/activate
+    pip install ansible==1.9.2
+
+
+``rdopkg`` is what is going to do the heavy lifting:
+
+    https://github.com/redhat-openstack/rdopkg
+
+There's a public repo with the up-to-date version that can be installed like this::
+
+    wget https://copr.fedoraproject.org/coprs/jruzicka/rdopkg/repo/epel-7/jruzicka-rdopkg-epel-7.repo
+    sudo cp jruzicka-rdopkg-epel-7.repo /etc/yum.repos.d
+
+    yum install -y rdopkg
+
+Newer Fedora versions use dnf instead of yum, so for the last step use::
+
+    dnf install -y rdopkg
+
+You will also need ``rhpkg`` or ``fedpkg``; those can be obtained from yum or dnf::
+
+    yum install -y rhpkg
+
+or::
+
+    yum install -y fedpkg
+
+Again, for newer Fedora versions replace yum with dnf::
+
+    dnf install -y rhpkg
+    dnf install -y fedpkg
+
+
+Khaleesi will build the package locally (in a /tmp/tmp.patch_rpm_* directory), but in
+order to do that it needs a file called ``local_hosts`` in your khaleesi folder.
+
+The ``local_hosts`` file should have this content::
+
+    [local]
+    localhost ansible_connection=local
+
+ksgen_settings needed
+`````````````````````
+
+Once you've got that, you need to set up which gerrit patch you want to test::
+
+
+    export GERRIT_BRANCH=
+    export GERRIT_REFSPEC=
+    export EXECUTOR_NUMBER=0; #needed for now
+
+
+Then you'll need to load this structure into your ``ksgen_settings.yml``::
+
+    patch:
+      upstream:
+        name: "upstream-"
+        url: "https://git.openstack.org/openstack/"
+      gerrit:
+        name: "gerrit-"
+        url: ""
+        branch: "{{ lookup('env', 'GERRIT_BRANCH') }}"
+        refspec: "{{ lookup('env', 'GERRIT_REFSPEC') }}"
+      dist_git:
+        name: "openstack-"
+        url: ""
+      use_director: False
+
+There are two ways to do that:
+
+Either set the values via extra-vars::
+
+    ksgen --config-dir settings \
+    generate \
+    --distro=rhel-7.1 \
+    --product=rhos \
+    --product-version=7.0 \
+    
--extra-vars patch.upstream.name=upstream- \
+    --extra-vars patch.upstream.url=https://git.openstack.org/openstack/ \
+    --extra-vars patch.gerrit.name=gerrit- \
+    --extra-vars patch.gerrit.url= \
+    --extra-vars patch.gerrit.branch=$GERRIT_BRANCH \
+    --extra-vars patch.gerrit.refspec=$GERRIT_REFSPEC \
+    --extra-vars patch.dist_git.name=openstack- \
+    --extra-vars patch.dist_git.url= \
+    --extra-vars @../khaleesi-settings/settings/product/rhos/private_settings/redhat_internal.yml \
+    ksgen_settings.yml
+
+Or, if khaleesi already has the settings for the package you are trying to build under khaleesi/settings/rpm/.yml, you can use this second method::
+
+    ksgen --config-dir settings \
+    generate \
+    --distro=rhel-7.1 \
+    --product=rhos \
+    --product-version=7.0 \
+    --rpm= \
+    --extra-vars @../khaleesi-settings/settings/product/rhos/private_settings/redhat_internal.yml \
+    ksgen_settings.yml
+
+.. Note:: At this time, this second method works only for instack-undercloud, ironic, tripleo-heat-templates and python-rdomanager-oscplugin
+
+
+Playbook usage
+``````````````
+
+Then just call the playbook with that ksgen_settings file::
+
+    ansible-playbook -vv --extra-vars @ksgen_settings.yml -i local_hosts playbooks/build_gate_rpm.yml
+
+When the playbook is done, the generated RPMs will be in the ``generated_rpms`` directory of your ``khaleesi`` checkout.
+
+
diff --git a/doc/index.rst b/doc/index.rst
index 754c3c380..c10a9a20c 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -13,6 +13,7 @@ Contents:
 
    khaleesi
    community_guidelines
+   best_practices
    development
    ksgen
    kcli
diff --git a/doc/khaleesi.rst b/doc/khaleesi.rst
index b478e9e9b..5a11af7a8 100644
--- a/doc/khaleesi.rst
+++ b/doc/khaleesi.rst
@@ -363,6 +363,71 @@ up the environment needed for running the tests:
 
 Testers are passed to the ksgen CLI as '--tester=' argument value: pep8,
 unittest, functional, integration, api, tempest
 
+Requirements:
+
+There is only one requirement: the component must have a jenkins-config yml file in
+the root of the component
directory. For example, if the component is neutron,
+then there should be a neutron/jenkins-config.yml file. The name may differ
+and can be set with --extra-vars tester.component.config_file in the ksgen
+invocation.
+
+The structure of a jenkins-config file should be similar to:
+
+----------------------- jenkins-config sample beginning ------------------------
+# Khaleesi will read and execute this section only if --tester=pep8 is included in the ksgen invocation
+pep8:
+  rpm_deps: [ python-neutron, python-hacking, pylint ]
+  remove_rpm: []
+  run: tox --sitepackages -v -e pep8 2>&1 | tee ../logs/testrun.log;
+
+# Khaleesi will read and execute this section only if --tester=unittest is included in the ksgen invocation
+unittest:
+  rpm_deps: [ python-neutron, python-cliff ]
+  remove_rpm: []
+  run: tox --sitepackages -v -e py27 2>&1 | tee ../logs/testrun.log;
+
+# Common RPMs that are used by all the testers
+rpm_deps: [ gcc, git, "{{ hostvars[inventory_hostname][tester.component.tox_target]['rpm_deps'] }}" ]
+
+# The RPMs that shouldn't be installed when running tests, no matter which tester is chosen
+remove_rpm: [ "{{ hostvars[inventory_hostname][tester.component.tox_target]['remove_rpm'] }}" ]
+
+# Any additional repos besides the defaults that should be enabled to support testing.
+# The repos need to already be installed; this just allows you to enable them.
+add_additional_repos: [ ]
+
+# Any repos to be disabled to support testing.
+# This just allows you to disable them.
+remove_additional_repos: [ ] + +# Common pre-run steps for all testers +neutron_virt_run_config: + run: > + set -o pipefail; + rpm -qa > installed-rpms.txt; + truncate --size 0 requirements.txt && truncate --size 0 test-requirements.txt; + {{ hostvars[inventory_hostname][tester.component.tox_target]['run'] }} + +# Files to archive + archive: + - ../logs/testrun.log + - installed-rpms.txt + +# Main section that will be read by khaleesi +test_config: + virt: + RedHat-7: + setup: + enable_repos: "{{ add_additional_repos }}" # Optional. When you would like to look in additional places for RPMs + disable_repos: "{{ remove_additional_repos }}" # Optional. When you would like to remove repos to search + install: "{{ rpm_deps }}" # Optional. When you would like to install requirements + remove: "{{ remove_rpm }}" # Optional. When you would like to remove packages + run: "{{ neutron_virt_run_config.run }}" # A must. The actual command used to run the tests + archive: "{{ neutron_virt_run_config.archive }}" # A must. Files to archive +----------------------- jenkins-config sample end ------------------------ + +Usage: + Below are examples on how to use the different testers: To run pep8 you would use the following ksgen invocation: @@ -399,10 +464,9 @@ To run functional tests, you would use: --provisioner-site=qeos \ --distro=rhel-7.2 \ --product=rhos \ - --installer=packstack \ - --installer-config=full \ # To install single component use basic_neutron + --installer=project \ + --installer-component=heat \ --tester=functional \ - --installer-component=neutron ksgen_settings.yml To run API in-tree tests, you would use: @@ -531,7 +595,7 @@ You must create a new `local_host` file. 
 Here again adjust the IP address of your undercloud::
 
     cat << EOF > local_hosts
     [undercloud]
-    undercloud groups=undercloud ansible_ssh_host= ansible_ssh_user=stack ansible_ssh_private_key_file=~/.ssh/id_rsa
+    undercloud groups=undercloud ansible_host= ansible_user=stack ansible_ssh_private_key_file=~/.ssh/id_rsa
     [local]
     localhost ansible_connection=local
     EOF
diff --git a/jenkins-jobs/builders.yaml b/jenkins-jobs/builders.yaml
index f4961f5b5..49b62e9e6 100644
--- a/jenkins-jobs/builders.yaml
+++ b/jenkins-jobs/builders.yaml
@@ -27,7 +27,7 @@
 
             # fetch dependent gating changes
             if [ $GERRIT_CHANGE_COMMIT_MESSAGE ]; then
-                ansible-playbook -i local_hosts -vv playbooks/depends-on.yml
+                ansible-playbook -i local_hosts -vv playbooks/depends-on-repo.yml
             fi
 
             # generate config
@@ -87,6 +87,86 @@
 
             exit $result
 
+- builder:
+    name: ksgen-builder-upstream
+    builders:
+      - shining-panda:
+          build-environment: virtualenv
+          python-version: system-CPython-2.7
+          nature: shell
+          clear: false
+          use-distribute: true
+          system-site-packages: false
+          ignore-exit-code: false
+          command: |
+            pip install -U ansible==1.9.2 > ansible_build; ansible --version
+
+            # install ksgen
+            pushd khaleesi/tools/ksgen
+            python setup.py install
+            popd
+
+            pushd khaleesi
+
+            cp ansible.cfg.example ansible.cfg
+            touch ssh.config.ansible
+            echo "" >> ansible.cfg
+            echo "[ssh_connection]" >> ansible.cfg
+            echo "ssh_args = -F ssh.config.ansible" >> ansible.cfg
+
+            # fetch dependent gating changes
+            if [ $GERRIT_CHANGE_COMMIT_MESSAGE ]; then
+                ansible-playbook -i local_hosts -vv playbooks/depends-on-repo.yml
+            fi
+
+            # generate config
+            ksgen --config-dir=settings generate \
+                --provisioner=centosci \
+                --provisioner-site=default \
+                --provisioner-distro=centos \
+                --provisioner-distro-version={provisioner-distro-version} \
+                --provisioner-site-user=rdo \
+                --product={product} \
+                --product-version={product-version} \
+                --product-version-build={pin} \
+                --product-version-repo={product-version-repo} \
+                --distro={distro} \
+                
--installer={installer} \ + --installer-deploy={installer-deploy} \ + --installer-env={installer-env} \ + --installer-images={installer-images} \ + --installer-network={network} \ + --installer-network-isolation={network-isolation} \ + --installer-network-variant={network-variant} \ + --installer-post_action={installer-post_action} \ + --installer-topology={installer-topology} \ + --installer-tempest={installer-tempest} \ + --rpm=use-delorean \ + --workarounds=enabled \ + --extra-vars @../khaleesi-settings/hardware_environments/virt/network_configs/{network-isolation}/hw_settings.yml \ + ksgen_settings.yml + + # get nodes and run test + set +e + anscmd="stdbuf -oL -eL ansible-playbook -vv --extra-vars @ksgen_settings.yml" + + $anscmd -i local_hosts playbooks/gate.yml + result=$? + + infra_result=0 + $anscmd -i hosts playbooks/collect_logs.yml &> collect_logs.txt || infra_result=1 + $anscmd -i local_hosts playbooks/cleanup.yml &> cleanup.txt || infra_result=2 + + if [[ "$infra_result" != "0" && "$result" = "0" ]]; then + # if the job/test was ok, but collect_logs/cleanup failed, + # print out why the job is going to be marked as failed + result=$infra_result + cat collect_logs.txt + cat cleanup.txt + fi + + exit $result + - builder: name: ksgen-builder-rdo-manager-promote builders: @@ -116,7 +196,7 @@ # fetch dependent gating changes if [ $GERRIT_CHANGE_COMMIT_MESSAGE ]; then - ansible-playbook -i local_hosts -vv playbooks/depends-on.yml + ansible-playbook -i local_hosts -vv playbooks/depends-on-repo.yml fi # generate config @@ -219,7 +299,7 @@ # fetch dependent gating changes if [ $GERRIT_CHANGE_COMMIT_MESSAGE ]; then - ansible-playbook -i local_hosts -vv playbooks/depends-on.yml + ansible-playbook -i local_hosts -vv playbooks/depends-on-repo.yml fi # generate config @@ -299,7 +379,7 @@ # fetch dependent gating changes if [ $GERRIT_CHANGE_COMMIT_MESSAGE ]; then - ansible-playbook -i local_hosts -vv playbooks/depends-on.yml + ansible-playbook -i local_hosts -vv 
playbooks/depends-on-repo.yml fi # generate config @@ -382,7 +462,7 @@ # fetch dependent gating changes if [ $GERRIT_CHANGE_COMMIT_MESSAGE ]; then - ansible-playbook -i local_hosts -vv playbooks/depends-on.yml + ansible-playbook -i local_hosts -vv playbooks/depends-on-repo.yml fi # generate config diff --git a/jenkins-jobs/defaults.yaml b/jenkins-jobs/defaults.yaml index bd7e6ba67..350bca194 100644 --- a/jenkins-jobs/defaults.yaml +++ b/jenkins-jobs/defaults.yaml @@ -159,6 +159,32 @@ keep-long-stdio: False test-stability: True +- trigger: + name: trigger-upstream-gate-rdo-manager + triggers: + - gerrit: + server-name: 'rdo-ci-openstack.org' + trigger-on: + - patchset-created-event + - comment-added-contains-event: + comment-contains-value: '(?i)^(Patch Set [0-9]+:)?( [\w\\+-]*)*(\n\n)?\s*(rdo)? ?(recheck)' + projects: + - project-compare-type: 'PLAIN' + project-pattern: 'openstack/{project}' + branches: + - branch-compare-type: 'PLAIN' + branch-pattern: '{branch}' + skip-vote: + successful: true + failed: true + unstable: true + notbuilt: true + failure-message: 'FAILURE' + successful-message: 'SUCCESS' + unstable-message: 'UNSTABLE' + custom-url: "* $JOB_NAME $BUILD_URL" + silent: true + - trigger: name: trigger-rdo-manager-gate-khaleesi triggers: diff --git a/jenkins-jobs/features.yml b/jenkins-jobs/features.yml index 708af6efa..3581f2e39 100644 --- a/jenkins-jobs/features.yml +++ b/jenkins-jobs/features.yml @@ -20,7 +20,6 @@ provisioner-site-user: 'rdo' provisioner-distro: '{provisioner-distro}' provisioner-distro-version: '{provisioner-distro-version}' - provisioner-options: 'skip_provision' product: '{product}' product-version: '{product-version}' product-version-repo: '{product-version-repo}' @@ -47,6 +46,8 @@ - tests-publishers - email: recipients: whayutin@redhat.com adarazs@redhat.com + triggers: + - timed: '@daily' - project: name: rdo-manager-centosci-feature-jobs diff --git a/jenkins-jobs/promote.yml b/jenkins-jobs/promote.yml index 3c96fd5b9..2de77e30a 
100644 --- a/jenkins-jobs/promote.yml +++ b/jenkins-jobs/promote.yml @@ -13,28 +13,12 @@ - timestamps - workspace-cleanup - timeout: - type: elastic - elastic-percentage: 300 - elastic-default-timeout: 360 - timeout: 360 + type: absolute + timeout: 120 + fail: true publishers: - default-publishers -- defaults: - name: parent-promote-defaults - description: | -

Documentation: http://khaleesi.readthedocs.org/en/master/

- - concurrent: false - node: khaleesi - logrotate: - daysToKeep: 5 - artifactDaysToKeep: 5 - wrappers: - - ansicolor - - timestamps - - workspace-cleanup - - job-template: name: 'packstack-promote-{product}-{product-version}' defaults: rdo-manager-defaults @@ -68,7 +52,6 @@ - ksgen-builder-rdo-manager-promote: provisioner-distro: '{provisioner-distro}' provisioner-distro-version: '{provisioner-distro-version}' - provisioner-options: 'skip_provision' product: '{product}' product-version: '{product-version}' product-version-repo: '{product-version-repo}' @@ -88,136 +71,11 @@ - ownership: owner: whayutin at redhat.com co-owners: - - trown at redhat.com - adarazs at redhat.com publishers: - default-publishers - tests-publishers -- job-template: - name: 'promote-get-hash' - defaults: script-defaults - builders: - - shell: - !include-raw-escape: - - scripts/centos-liberty.sh - - scripts/promote-get-hash.sh - properties: - - ownership: - owner: whayutin at redhat.com - co-owners: - - trown at redhat.com - -- job-template: - name: 'promote-upload' - defaults: script-defaults - builders: - - shell: - !include-raw-escape: - - scripts/centos-liberty.sh - - scripts/promote-upload-images.sh - properties: - - ownership: - owner: whayutin at redhat.com - co-owners: - - trown at redhat.com - -- job-template: - name: 'promote-execute-promote-centos-liberty' - defaults: script-defaults - builders: - - shell: - !include-raw-escape: - - scripts/centos-liberty.sh - - scripts/promote-execute-promote.sh - properties: - - ownership: - owner: whayutin at redhat.com - co-owners: - - trown at redhat.com - - -- job-template: - name: rdo-delorean-promote-liberty - project-type: multijob - triggers: - - timed: "H */8 * * *" - defaults: parent-promote-defaults - builders: - - phase-get-hash - - phase-test-build - - phase-test-import - - phase-upload - - phase-execute-promote-centos-liberty - properties: - - ownership: - owner: whayutin@redhat.com - -- project: - name: rdo-manager-promote-jobs - 
jobs: - - rdo-delorean-promote-liberty - -- builder: - name: phase-get-hash - builders: - - multijob: - name: "GET THE LATEST DELOREAN YUM REPOSITORY HASH" - condition: SUCCESSFUL - projects: - - name: promote-get-hash - -- builder: - name: phase-test-build - builders: - - multijob: - name: "INSTALL / TEST (BUILD IMAGES)" - condition: UNSTABLE - projects: - - name: rdo-manager-promote-rdo-liberty-minimal_no_ceph-build_rdo_promote - kill-phase-on: NEVER - property-file: /tmp/delorean_current_hash - - name: packstack-promote-rdo-liberty - kill-phase-on: NEVER - property-file: /tmp/delorean_current_hash - -- builder: - name: phase-test-import - builders: - - multijob: - name: "INSTALL / TEST (IMPORT IMAGES)" - condition: UNSTABLE - projects: - - name: rdo-manager-promote-rdo-liberty-minimal_no_ceph-build - kill-phase-on: NEVER - property-file: /tmp/delorean_current_hash - - name: rdo-manager-promote-rdo-liberty-minimal_no_ceph-import_rdo_overcloud - kill-phase-on: NEVER - property-file: /tmp/delorean_current_hash - - name: rdo-manager-promote-rdo-liberty-minimal_ha_no_ceph-import_rdo_overcloud - kill-phase-on: NEVER - property-file: /tmp/delorean_current_hash - -- builder: - name: phase-upload - builders: - - multijob: - name: "UPLOAD IMAGES TO FILE SERVER" - condition: SUCCESSFUL - projects: - - name: promote-upload - property-file: /tmp/delorean_current_hash - -- builder: - name: phase-execute-promote-centos-liberty - builders: - - multijob: - name: "UPLOAD IMAGES TO FILE SERVER" - condition: SUCCESSFUL - projects: - - name: promote-execute-promote-centos-liberty - property-file: /tmp/delorean_current_hash - - project: name: rdo-promote-jobs installer: rdo_manager @@ -259,19 +117,3 @@ pin: latest jobs: - 'packstack-promote-{product}-{product-version}' - - -- project: - name: promote-get-hash - jobs: - - promote-get-hash - -- project: - name: promote-upload - jobs: - - promote-upload - -- project: - name: promote-execute-promote-centos-liberty - jobs: - - 
promote-execute-promote-centos-liberty diff --git a/jenkins-jobs/rdo-manager.yaml b/jenkins-jobs/rdo-manager.yaml index d0c306b01..6bad625d0 100644 --- a/jenkins-jobs/rdo-manager.yaml +++ b/jenkins-jobs/rdo-manager.yaml @@ -10,7 +10,6 @@ - ksgen-builder-rdo-manager: provisioner-distro: '{provisioner-distro}' provisioner-distro-version: '{provisioner-distro-version}' - provisioner-options: 'skip_provision' product: '{product}' product-version: '{product-version}' product-version-repo: '{product-version-repo}' @@ -53,7 +52,6 @@ - ksgen-builder-rdo-manager: provisioner-distro: '{provisioner-distro}' provisioner-distro-version: '{provisioner-distro-version}' - provisioner-options: 'skip_provision' product: '{product}' product-version: '{product-version}' product-version-repo: '{product-version-repo}' @@ -83,7 +81,6 @@ - ksgen-builder-rdo-manager: provisioner-distro: '{provisioner-distro}' provisioner-distro-version: '{provisioner-distro-version}' - provisioner-options: 'skip_provision' product: '{product}' product-version: '{product-version}' product-version-repo: '{product-version-repo}' @@ -112,7 +109,6 @@ - ksgen-builder-rdo-manager: provisioner-distro: '{provisioner-distro}' provisioner-distro-version: '{provisioner-distro-version}' - provisioner-options: 'skip_provision' product: '{product}' product-version: '{product-version}' product-version-repo: '{product-version-repo}' @@ -153,7 +149,6 @@ - ksgen-builder-rdo-manager: provisioner-distro: '{provisioner-distro}' provisioner-distro-version: '{provisioner-distro-version}' - provisioner-options: 'skip_provision' product: '{product}' product-version: '{product-version}' product-version-repo: '{product-version-repo}' diff --git a/jenkins-jobs/upstream.yaml b/jenkins-jobs/upstream.yaml new file mode 100644 index 000000000..6cbba30e1 --- /dev/null +++ b/jenkins-jobs/upstream.yaml @@ -0,0 +1,57 @@ +- job-template: + name: 'upstream-gate-{project}-{installer}-{product}-{product-version}-{installer-tempest}' + defaults: 
rdo-manager-defaults + triggers: + - trigger-upstream-gate-rdo-manager: + branch: '{branch}' + project: '{project}' + scm: + - repo-khaleesi + - repo-khaleesi-settings + builders: + - ksgen-builder-upstream: + provisioner-distro: '{provisioner-distro}' + provisioner-distro-version: '{provisioner-distro-version}' + product: '{product}' + product-version: '{product-version}' + product-version-repo: '{product-version-repo}' + distro: '{distro}' + installer: '{installer}' + installer-deploy: '{installer-deploy}' + installer-env: '{installer-env}' + installer-images: '{installer-images}' + installer-post_action: '{installer-post_action}' + installer-topology: '{installer-topology}' + installer-tempest: '{installer-tempest}' + network: '{network}' + network-isolation: '{network-isolation}' + network-variant: '{network-variant}' + pin: '{pin}' + +- project: + name: upstream-gate-jobs-rdo-manager-centosci + project: instack-undercloud + installer: rdo_manager + installer-deploy: templates + installer-env: virthost + installer-images: build + installer-post_action: none + installer-topology: minimal_no_ceph + installer-tempest: smoke + network: neutron + network-isolation: none + network-variant: ml2-vxlan + product: rdo + product-version-repo: delorean + distro: centos-7.0 + provisioner-distro: centos + provisioner-distro-version: 7 + pin: last_known_good + + jobs: + - 'upstream-gate-{project}-{installer}-{product}-{product-version}-{installer-tempest}': + product-version: liberty + branch: stable/liberty + - 'upstream-gate-{project}-{installer}-{product}-{product-version}-{installer-tempest}': + product-version: mitaka + branch: master diff --git a/library/heat_stack.py b/library/heat_stack.py index 7c5135dc4..1427d9196 100644 --- a/library/heat_stack.py +++ b/library/heat_stack.py @@ -2,13 +2,13 @@ #coding: utf-8 -*- try: - from time import sleep + import time from keystoneclient.v2_0 import client as ksclient from heatclient.client import Client from heatclient.common 
import template_utils from heatclient.common import utils except ImportError: - print("failed=True msg='heatclient, keystoneclient are required'") + print("failed=True msg='heatclient and keystoneclient is required'") DOCUMENTATION = ''' --- @@ -55,7 +55,7 @@ - Path of the template file to use for the stack creation required: false default: None - environment: + environment_files: description: - List of environment files that should be used for the stack creation required: false @@ -67,138 +67,144 @@ # Create a stack with given template and environment files - name: create stack heat_stack: - stack_name: test - state: present login_username: admin login_password: admin - auth_url: http://192.168.1.14:5000/v2.0 - login_tenant_name: admin + auth_url: "http://192.168.1.14:5000/v2.0" tenant_name: admin - template: /home/stack/test.yaml + stack_name: test + state: present + template: "/home/stack/ovb/templates/quintupleo.yaml" + environment_files: ['/home/stack/ovb/templates/resource-registry.yaml','/home/stack/ovb/templates/env.yaml'] + + - name: delete stack + heat_stack: + stack_name: test + state: absent + login_username: admin + login_password: admin + auth_url: "http://192.168.1.14:5000/v2.0" + tenant_name: admin ''' -_os_keystone = None -_os_tenant_id = None -_os_network_id = None -_inc = 0 - -def _get_ksclient(module, kwargs): - try: - kclient = ksclient.Client(username=kwargs.get('login_username'), - password=kwargs.get('login_password'), - tenant_name=kwargs.get('login_tenant_name'), - auth_url=kwargs.get('auth_url')) - except Exception, e: - module.fail_json(msg = "Error authenticating to the keystone: %s" %e.message) - global _os_keystone - _os_keystone = kclient - return kclient - -def _get_endpoint(module, ksclient): - try: - endpoint = ksclient.service_catalog.url_for(service_type='orchestration', endpoint_type='publicURL') - except Exception, e: - module.fail_json(msg = "Error getting network endpoint: %s" % e.message) - return endpoint - -def 
_set_tenant_id(module): - global _os_tenant_id - if not module.params['tenant_name']: - tenant_name = module.params['login_tenant_name'] - else: - tenant_name = module.params['tenant_name'] - - for tenant in _os_keystone.tenants.list(): - if tenant.name == tenant_name: - _os_tenant_id = tenant.id - break - if not _os_tenant_id: - module.fail_json(msg = "The tenant id cannot be found, please check the parameters") - -def _get_heat_client(module, kwargs): - _ksclient = _get_ksclient(module, kwargs) - token = _ksclient.auth_token - endpoint = _get_endpoint(module, _ksclient) - try: - heat = Client('1', endpoint=endpoint, token=token) - except Exception, e: - module.fail_json(msg = " Error in connecting to heat: %s" % e.message) - return heat - -def _create_stack(module, heat): - heat.format = 'json' - template_file = module.params['template'] - env_file = module.params['environment_files'] - tpl_files, template = template_utils.get_template_contents(template_file) - env_files, env = template_utils.process_multiple_environments_and_files(env_paths=env_file) - - stack = heat.stacks.create(stack_name=module.params['stack_name'], - template=template, - environment=env, - files=dict(list(tpl_files.items()) + list(env_files.items())), - parameters={}) - uid = stack['stack']['id'] - - stack = heat.stacks.get(stack_id=uid).to_dict() - while stack['stack_status'] == 'CREATE_IN_PROGRESS': - stack = heat.stacks.get(stack_id=uid).to_dict() - sleep(5) - if stack['stack_status'] == 'CREATE_COMPLETE': - return stack['stack']['id'] - else: - module.fail_json(msg = "Failure in creating stack: ".format(stack)) - -def _list_stack(module, heat): - fields = ['id', 'stack_name', 'stack_status', 'creation_time', - 'updated_time'] - uids = [] - stacks = heat.stacks.list() - return utils.print_list(stacks, fields) - -def _delete_stack(module, heat): - heat.stacks.delete(module.param['stack_name']) - return _list_stack - -def _get_stack_id(module, heat): - stacks = heat.stacks.list() - while 
True:
-        try:
-            stack = stacks.next()
-            if module.param['stack_name'] == stack.stack_name:
-                return stack.id
-        except StopIteration:
-            break
+def obj_gen_to_dict(gen):
+    """Iterate through a generator of objects and return a list of dictionaries."""
+    obj_list = []
+    for obj in gen:
+        obj_list.append(obj.to_dict())
+    return obj_list
+
+
+class Stack(object):
+
+    def __init__(self, kwargs):
+        self.client = self._get_client(kwargs)
+
+    def _get_client(self, kwargs, endpoint_type='publicURL'):
+        """ get heat client """
+        kclient = ksclient.Client(**kwargs)
+        token = kclient.auth_token
+        endpoint = kclient.service_catalog.url_for(service_type='orchestration',
+                                                   endpoint_type=endpoint_type)
+        return Client('1', endpoint=endpoint, token=token)
+
+    def create(self, name,
+               template_file,
+               env_file=None,
+               format='json'):
+        """ create heat stack with the given template and environment files """
+        self.client.format = format
+        tpl_files, template = template_utils.get_template_contents(template_file)
+        env_files, env = template_utils.process_multiple_environments_and_files(env_paths=env_file)
+
+        stack = self.client.stacks.create(stack_name=name,
+                                          template=template,
+                                          environment=env,
+                                          files=dict(list(tpl_files.items()) + list(env_files.items())),
+                                          parameters={})
+        uid = stack['stack']['id']
+
+        stack = self.client.stacks.get(stack_id=uid).to_dict()
+        while stack['stack_status'] == 'CREATE_IN_PROGRESS':
+            stack = self.client.stacks.get(stack_id=uid).to_dict()
+            time.sleep(5)
+        if stack['stack_status'] == 'CREATE_COMPLETE':
+            return stack
+        else:
+            return False
+
+    def list(self):
+        """ list created stacks """
+        fields = ['id', 'stack_name', 'stack_status', 'creation_time',
+                  'updated_time']
+        stacks = self.client.stacks.list()
+        utils.print_list(stacks, fields)
+        return obj_gen_to_dict(stacks)
+
+    def delete(self, name):
+        """ delete stack with the given name """
+        self.client.stacks.delete(name)
+        return self.list()
+
+    def
get_id(self, name): + """ get stack id by name """ + stacks = self.client.stacks.list() + while True: + try: + stack = stacks.next() + if name == stack.stack_name: + return stack.id + except StopIteration: + break + return False def main(): argument_spec = openstack_argument_spec() argument_spec.update(dict( stack_name = dict(required=True), template = dict(default=None), - environment_files = dict(default=None, type='dict'), + environment_files = dict(default=None, type='list'), state = dict(default='present', choices=['absent', 'present']), tenant_name = dict(default=None), )) module = AnsibleModule(argument_spec=argument_spec) - heat = _get_heat_client(module, module.params) - _set_tenant_id(module) + state = module.params['state'] + stack_name = module.params['stack_name'] + template = module.params['template'] + environment_files = module.params['environment_files'] + kwargs = { + 'username': module.params['login_username'], + 'password': module.params['login_password'], + 'tenant_name': module.params['tenant_name'], + 'auth_url': module.params['auth_url'] + } + + stack = Stack(kwargs) if module.params['state'] == 'present': - stack_id = _get_stack_id(module, heat) + stack_id = stack.get_id(stack_name) if not stack_id: - stack_id = _create_stack(module, heat) - module.exit_json(changed = True, result = "Created" , id = stack_id) + stack = stack.create(name=stack_name, + template_file=template, + env_file=environment_files) + if not stack: + module.fail_json(msg="Failed to create stack") + module.exit_json(changed = True, result = "Created" , stack = stack) else: module.exit_json(changed = False, result = "success" , id = stack_id) else: - stack_id = _get_stack_id(module, stack) + stack_id = stack.get_id(stack_name) if not stack_id: module.exit_json(changed = False, result = "success") else: - _delete_stack(module, stack, stack_id) + stack.delete(stack_name) module.exit_json(changed = True, result = "deleted") # this is magic, see 
lib/ansible/module_common.py
 from ansible.module_utils.basic import *
 from ansible.module_utils.openstack import *
-main()
+if __name__ == '__main__':
+    main()
diff --git a/library/rhos-release.py b/library/rhos-release.py
new file mode 100644
index 000000000..862dc71ec
--- /dev/null
+++ b/library/rhos-release.py
@@ -0,0 +1,241 @@
+#!/usr/bin/python
+
+# (c) 2014, Red Hat, Inc.
+# Written by Yair Fried
+#
+# This module is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This software is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this software. If not, see .
+
+import os
+import re
+from os import listdir
+from os.path import isfile, join
+
+DOCUMENTATION = '''
+---
+module: rhos-release
+description:
+  - Add/remove RHEL-OSP repo files on RHEL systems
+options:
+  state:
+    description:
+      - Whether to add (C(pinned), C(rolling)), or remove (C(absent)) repo files.
+        If C(pinned), it will grab the latest available version but pin the puddle
+        version (dereferencing 'latest' links to prevent content from changing).
+        If C(rolling), it will grab the latest in "rolling-release" and keep all links
+        pointing to the latest version.
+    choices: ['pinned', 'rolling', 'absent']
+    default: pinned
+  release:
+    description:
+      - release name to find
+  dest:
+    description:
+      - target directory for repo files
+    default: "/etc/yum.repos.d"
+  distro:
+    description:
+      - override the default RHEL version
+  repo_type:
+    description:
+      - Controls the repo type C(puddle) or C(poodle)
+    choices: ['puddle', 'poodle']
+    default: puddle
+  version:
+    description:
+      - Specific puddle/poodle selection.
+        This can be a known-symlink (Y1, Z1, GA, etc.), or
+        a puddle date stamp in the form of YYYY-MM-DD.X
+
+
+notes:
+  - requires rhos-release version 1.0.23
+requirements: [ rhos-release ]
+'''
+
+EXAMPLES = '''
+- name: Remove all RHEL-OSP repo files.
+  rhos-release: state=absent
+
+- name: Add latest RHEL-OSP repo files for RHEL-OSP 7 and pin the version.
+  rhos-release: release=7
+
+- name: Add latest RHEL-OSP repo files for RHEL-OSPd 7 and pin the version.
+  rhos-release: release=7_director
+
+- name: Add latest RHEL-OSP repo files for RHEL-OSP 7 unpinned (rolling release).
+  rhos-release: release=7 state=rolling
+
+- name: Add latest RHEL-OSP repo files for RHEL-OSPd 7 unpinned (rolling release).
+  rhos-release: release=7_director state=rolling
+
+'''
+
+
+REPODST = "/etc/yum.repos.d"
+
+
+def get_repo_list(repodst):
+    return [f for f in listdir(repodst) if isfile(join(repodst, f)) and
+            f.startswith('rhos-release-') and f.endswith(".repo")]
+
+
+def _remove_repos(module, base_cmd):
+    """ Remove RHEL-OSP repo files """
+
+    repodst = REPODST
+    cmd = [base_cmd, '-x']
+
+    if module.params["dest"]:
+        repodst = module.params["dest"]
+        cmd.extend(["-t", module.params["dest"]])
+
+    repo_files = get_repo_list(repodst)
+    if repo_files:
+
+        rc, out, err = module.run_command(cmd)
+        if rc == 127:
+            module.fail_json(msg='Requires rhos-release installed. 
%s: %s' % (cmd, err)) + elif rc: + module.fail_json(msg='Error: %s: %s' % (cmd, err)) + empty_repo_files = get_repo_list(repodst) + if empty_repo_files: + module.fail_json(msg="Failed to remove files: %s" % empty_repo_files) + module.exit_json(changed=True, deleted_files=repo_files) + else: + module.exit_json(changed=False, msg="No repo files found") + + +def _parse_output(module, stdout): + """Parse rhos-release stdout. + + lines starting with "Installed": + list of repo files created. + verify all files are created in the same directory. + + lines starting with "# rhos-release": + Installed channel details + release=release number (should match "release" input), + version=version tag of release, + repo_type="poodle"/"puddle", + channel=ospd/core, + verify no more than 2 channels installed - core and/or ospd + + :return: dict( + repodir=absolute path of directory where repo files were created, + files=list of repo files created (filter output duplications), + releases=list of channels (see channel details) installed, + stdout=standard output of rhos-release, + ) + """ + file_lines = [line for line in stdout.splitlines() if line.startswith("Installed")] + + def installed(line): + pattern = re.compile(r'(?P<prefix>Installed: )(?P<filename>\S+)') + match = pattern.search(line) + if not match: + module.fail_json(msg="Failed to parse line %s" % line) + filename = os.path.abspath(match.group("filename")) + return dict( + file=os.path.basename(filename), + repodir=os.path.dirname(filename) + ) + + filenames = map(installed, file_lines) + dirs = set(f["repodir"] for f in filenames) + if len(dirs) > 1: + module.fail_json(msg="Found repo files in multiple directories %s" % dirs) + repodir = dirs.pop() + filenames = set(f["file"] for f in filenames) + + release_lines = [line for line in stdout.splitlines() if line.startswith("# rhos-release ")] + + def released(line): + pattern = re.compile(r'(?P<prefix># rhos-release )' + r'(?P<release>\d+)\s*' + r'(?P<director>-director)?\s*' + r'(?P<poodle>-d)?\s*' + r'-p (?P<version>\S+)' + ) + match = 
pattern.search(line) + if not match: + module.fail_json(msg="Failed to parse line %s" % line) + return dict( + release=match.group("release"), + version=match.group("version"), + repo_type="poodle" if match.group("poodle") else "puddle", + channel="ospd" if match.group("director") else "core", + ) + + installed_releases = map(released, release_lines) + if len(installed_releases) > 2 or (len(installed_releases) == 2 and + set(r["channel"] for r in installed_releases) != set(("ospd", "core"))): + module.fail_json(msg="Can't handle more than 2 channels. 1 core, 1 ospd. Found %s" % installed_releases) + + return dict( + repodir=repodir, + files=list(filenames), + releases=installed_releases, + stdout=stdout + ) + + +def _get_latest_repos(module, base_cmd, state, release): + """ Add RHEL-OSP latest repos """ + + if not release: + module.fail_json(msg="Missing release number for '%s' state" % state) + cmd = [base_cmd, release] + if state == "pinned": + cmd.append('-P') + if module.params["dest"]: + cmd.extend(["-t", module.params["dest"]]) + if module.params["distro"]: + cmd.extend(["-r", module.params["distro"]]) + if module.params["repo_type"] == "poodle": + cmd.append("-d") + if module.params["version"]: + cmd.extend(["-p", module.params["version"]]) + + rc, out, err = module.run_command(cmd) + if rc == 127: + module.fail_json(msg='Requires rhos-release installed. 
%s: %s' % (cmd, err)) + elif rc: + module.fail_json(msg='Error: %s: %s' % (cmd, err)) + summary = _parse_output(module, out) + module.exit_json(changed=True, **summary) + + +def main(): + """ Main """ + module = AnsibleModule( + argument_spec = dict( + state=dict(default="pinned", choices=['absent', 'pinned', 'rolling'], required=False), + release=dict(required=True), + dest=dict(default=None, required=False), + distro=dict(default=None, required=False), + repo_type=dict(default="puddle", choices=['puddle', 'poodle'], required=False), + version=dict(default=None, required=False) + ) + ) + state = module.params["state"] + release = module.params["release"] + + base_cmd = "rhos-release" + if state == "absent": + _remove_repos(module, base_cmd) + else: + _get_latest_repos(module, base_cmd, state, release) + +# import module snippets +from ansible.module_utils.basic import * +if __name__ == '__main__': + main() diff --git a/library/tls_tht b/library/tls_tht new file mode 120000 index 000000000..1d01251ad --- /dev/null +++ b/library/tls_tht @@ -0,0 +1 @@ +tls_tht.py \ No newline at end of file diff --git a/library/tls_tht.py b/library/tls_tht.py new file mode 100644 index 000000000..0ef393ac5 --- /dev/null +++ b/library/tls_tht.py @@ -0,0 +1,129 @@ +#!/usr/bin/python +# coding: utf-8 -*- + +# (c) 2016, Adriano Petrich +# +# This module is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# This software is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this software. If not, see <http://www.gnu.org/licenses/>. 
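The `_parse_output` helper in library/rhos-release.py above picks apart rhos-release stdout with named-group regexes. A standalone sketch of that parsing logic; the sample output lines below are illustrative assumptions, not captured from a real rhos-release run:

```python
import re

# Patterns mirroring library/rhos-release.py's _parse_output above.
INSTALLED = re.compile(r'(?P<prefix>Installed: )(?P<filename>\S+)')
RELEASED = re.compile(r'(?P<prefix># rhos-release )'
                      r'(?P<release>\d+)\s*'
                      r'(?P<director>-director)?\s*'
                      r'(?P<poodle>-d)?\s*'
                      r'-p (?P<version>\S+)')

# Hypothetical sample output; real rhos-release output may differ.
sample = """Installed: /etc/yum.repos.d/rhos-release-7.repo
# rhos-release 7 -director -p 2015-12-03.1
"""

# "Installed:" lines yield the repo files that were written.
files = [INSTALLED.search(line).group("filename")
         for line in sample.splitlines() if line.startswith("Installed")]

# "# rhos-release" lines yield the channel details.
m = RELEASED.search("# rhos-release 7 -director -p 2015-12-03.1")
channel = "ospd" if m.group("director") else "core"
repo_type = "poodle" if m.group("poodle") else "puddle"
```

Because `-d` (poodle) is matched only after the optional `-director` group, a `-director` flag is never mistaken for the poodle marker.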
+ +DOCUMENTATION = ''' +--- +module: tls_tht +version_added: "1.9" +short_description: Generate the tht templates for enabled ssl +description: + - Generate the tht templates for enabled ssl +options: + source_dir: + description: + - directory to copy the templates from + required: false + default: "/usr/share/openstack-tripleo-heat-templates/" + dest_dir: + description: + - where to copy the files to + required: false + default: "" + cert_filename: + description: + - the cert pem filename + required: false + default: cert.pem + cert_ca_filename: + description: + - the CA cert pem filename + required: false + default: cert.pem + key_filename: + description: + - the key pem filename + required: false + default: key.pem + + +''' + +EXAMPLES = ''' +# Generate the tht templates for enabled ssl +- tls_tht: +''' + +import yaml +from ansible.module_utils.basic import * # noqa + + +def _open_yaml(filename): + with open(filename, "r") as stream: + tmp_dict = yaml.safe_load(stream) + return tmp_dict + + +def create_enable_file(certpem, keypem, source_dir, dest_dir): + output_dict = _open_yaml("{}environments/enable-tls.yaml".format(source_dir)) + + for key in output_dict["parameter_defaults"]["EndpointMap"]: + if output_dict["parameter_defaults"]["EndpointMap"][key]["host"] == "CLOUDNAME": + output_dict["parameter_defaults"]["EndpointMap"][key]["host"] = "IP_ADDRESS" + + output_dict["parameter_defaults"]["SSLCertificate"] = certpem + output_dict["parameter_defaults"]["SSLKey"] = keypem + + output_dict["resource_registry"]["OS::TripleO::NodeTLSData"] = \ + "{}/puppet/extraconfig/tls/tls-cert-inject.yaml".format(source_dir) + + with open("{}enable-tls.yaml".format(dest_dir), "w") as stream: + yaml.safe_dump(output_dict, stream, default_style='|') + + +def create_anchor_file(cert_ca_pem, source_dir, dest_dir): + output_dict = _open_yaml( + "{}environments/inject-trust-anchor.yaml".format(source_dir) + ) + + output_dict["parameter_defaults"]["SSLRootCertificate"] = cert_ca_pem + + 
output_dict["resource_registry"]["OS::TripleO::NodeTLSCAData"] = \ + "{}/puppet/extraconfig/tls/tls-cert-inject.yaml".format(source_dir) + + with open("{}inject-trust-anchor.yaml".format(dest_dir), "w") as stream: + yaml.safe_dump(output_dict, stream, default_style='|') + + +def main(): + module = AnsibleModule( + argument_spec=dict( + source_dir=dict(default="/usr/share/openstack-tripleo-heat-templates/", + required=False), + dest_dir=dict(default="", required=False), + cert_filename=dict(default="cert.pem", required=False), + cert_ca_filename=dict(default="cert.pem", required=False), + key_filename=dict(default="key.pem", required=False), + ) + ) + + with open(module.params["cert_filename"], "r") as stream: + certpem = stream.read() + + with open(module.params["cert_ca_filename"], "r") as stream: + cert_ca_pem = stream.read() + + with open(module.params["key_filename"], "r") as stream: + keypem = stream.read() + + create_enable_file(certpem, keypem, module.params["source_dir"], module.params["dest_dir"]) + create_anchor_file(cert_ca_pem, module.params["source_dir"], module.params["dest_dir"]) + module.exit_json(changed=True) + + +if __name__ == '__main__': + main() diff --git a/playbooks/adhoc/sriov/compute.yml b/playbooks/adhoc/sriov/compute.yml index 3005cc763..38973583f 100644 --- a/playbooks/adhoc/sriov/compute.yml +++ b/playbooks/adhoc/sriov/compute.yml @@ -40,6 +40,11 @@ - name: Enable neutron-sriov-nic-agent service: name=neutron-sriov-nic-agent state=started enabled=yes - - local_action: - module: wait_for_ssh reboot_first=true host={{ hostvars[inventory_hostname].ansible_ssh_host }} user={{ hostvars[inventory_hostname].ansible_ssh_user }} key={{ hostvars[inventory_hostname].ansible_ssh_private_key_file }} + - name: reboot and wait for ssh + delegate_to: localhost + wait_for_ssh: + reboot_first: true + host: "{{ hostvars[inventory_hostname].ansible_host }}" + user: "{{ hostvars[inventory_hostname].ansible_user }}" + key: "{{ 
hostvars[inventory_hostname].ansible_ssh_private_key_file }}" sudo: no diff --git a/playbooks/collect_logs.yml b/playbooks/collect_logs.yml index d8662815d..3589bdad3 100644 --- a/playbooks/collect_logs.yml +++ b/playbooks/collect_logs.yml @@ -156,13 +156,19 @@ ignore_errors: true - name: extract the logs - local_action: unarchive src={{ base_dir }}/khaleesi/collected_files/{{ inventory_hostname }}.tar dest={{ base_dir }}/khaleesi/collected_files/ + delegate_to: localhost + unarchive: + src: "{{ base_dir }}/khaleesi/collected_files/{{ inventory_hostname }}.tar" + dest: "{{ base_dir }}/khaleesi/collected_files/" sudo: no ignore_errors: true when: job.gzip_logs is defined and job.gzip_logs - name: delete the tar file after extraction - local_action: file path={{ base_dir }}/khaleesi/collected_files/{{ inventory_hostname }}.tar state=absent + delegate_to: localhost + file: + path: "{{ base_dir }}/khaleesi/collected_files/{{ inventory_hostname }}.tar" + state: absent sudo: no ignore_errors: true when: job.gzip_logs is defined and job.gzip_logs diff --git a/playbooks/depends-on-repo.yml b/playbooks/depends-on-repo.yml new file mode 100644 index 000000000..2a1bdb4b8 --- /dev/null +++ b/playbooks/depends-on-repo.yml @@ -0,0 +1,5 @@ +--- +- name: fetch commit dependencies on repos + roles: + - { role: depends-on, update: "repo" } + hosts: localhost diff --git a/playbooks/depends-on-rpm.yml b/playbooks/depends-on-rpm.yml new file mode 100644 index 000000000..f1bca4767 --- /dev/null +++ b/playbooks/depends-on-rpm.yml @@ -0,0 +1,5 @@ +--- +- name: fetch commit dependencies and build rpms + roles: + - { role: depends-on, update: "rpm" } + hosts: localhost diff --git a/playbooks/depends-on.yml b/playbooks/depends-on.yml deleted file mode 100644 index 26c54f14a..000000000 --- a/playbooks/depends-on.yml +++ /dev/null @@ -1,5 +0,0 @@ ---- -- name: fetch commit dependencies - roles: - - depends-on - hosts: localhost diff --git a/playbooks/full-job-opendaylight.yml 
b/playbooks/full-job-opendaylight.yml new file mode 100644 index 000000000..51bc03801 --- /dev/null +++ b/playbooks/full-job-opendaylight.yml @@ -0,0 +1,5 @@ +--- +- include: provision.yml +- include: install.yml +- include: post-deploy/{{ installer.type }}/opendaylight/main.yml +- include: test.yml diff --git a/playbooks/full-job-patch-dvr.yml b/playbooks/full-job-patch-dvr.yml index 1fe6890fd..c17331fc9 100644 --- a/playbooks/full-job-patch-dvr.yml +++ b/playbooks/full-job-patch-dvr.yml @@ -22,10 +22,10 @@ yum: name=createrepo state=present - name: create repo folder - file: path=/home/{{ ansible_ssh_user }}/dist-git/ state=directory + file: path=/home/{{ ansible_user }}/dist-git/ state=directory - name: copy the generated rpms - copy: src={{ item }} dest=/home/{{ ansible_ssh_user }}/dist-git/{{ patch.dist_git.name }}/ + copy: src={{ item }} dest=/home/{{ ansible_user }}/dist-git/{{ patch.dist_git.name }}/ with_fileglob: - "{{ lookup('env', 'PWD') }}/generated_rpms/*.rpm" @@ -36,7 +36,7 @@ - name: Create local repo for patched rpm sudo: yes - shell: "createrepo /home/{{ ansible_ssh_user }}/dist-git/{{ patch.dist_git.name }}" + shell: "createrepo /home/{{ ansible_user }}/dist-git/{{ patch.dist_git.name }}" when: "{{ hostvars['localhost'].rpm_build_rc }} == 0" - include: install.yml diff --git a/playbooks/full-job-patch-opendaylight.yml b/playbooks/full-job-patch-opendaylight.yml new file mode 100644 index 000000000..dceb05f6d --- /dev/null +++ b/playbooks/full-job-patch-opendaylight.yml @@ -0,0 +1,44 @@ +--- +- name: Patch rpm + hosts: local + roles: + - patch_rpm + +- include: provision.yml + +- name: Create local repo for patched rpm + hosts: controller + tasks: + - name: Install release tool + sudo: yes + command: "yum localinstall -y {{ product.rpm }}" + + - name: Execute rhos-release for packstack poodle/puddle + sudo: yes + command: "rhos-release {{ product.full_version|int }} {{ product.repo.rhos_release.extra_args|join(' ') }}" + + - name: Install 
createrepo + sudo: yes + yum: name=createrepo state=present + + - name: create repo folder + file: path=/home/{{ ansible_user }}/dist-git/ state=directory + + - name: copy the generated rpms + copy: src={{ item }} dest=/home/{{ ansible_user }}/dist-git/{{ patch.dist_git.name }}/ + with_fileglob: + - "{{ lookup('env', 'PWD') }}/generated_rpms/*.rpm" + + - name: Setup repository for patched rpm + sudo: yes + template: "src={{ lookup('env', 'PWD') }}/roles/patch_rpm/templates/patched_rpms.j2 dest=/etc/yum.repos.d/patched_rpms.repo" + when: hostvars["localhost"].rpm_build_rc == 0 + + - name: Create local repo for patched rpm + sudo: yes + shell: "createrepo /home/{{ ansible_user }}/dist-git/{{ patch.dist_git.name }}" + when: hostvars["localhost"].rpm_build_rc == 0 + +- include: install.yml +- include: post-deploy/{{ installer.type }}/opendaylight/main.yml +- include: test.yml diff --git a/playbooks/full-job-patch.yml b/playbooks/full-job-patch.yml index 36afda670..7e729275a 100644 --- a/playbooks/full-job-patch.yml +++ b/playbooks/full-job-patch.yml @@ -22,10 +22,10 @@ yum: name=createrepo state=present - name: create repo folder - file: path=/home/{{ ansible_ssh_user }}/dist-git/ state=directory + file: path=/home/{{ ansible_user }}/dist-git/ state=directory - name: copy the generated rpms - copy: src={{ item }} dest=/home/{{ ansible_ssh_user }}/dist-git/{{ patch.dist_git.name }}/ + copy: src={{ item }} dest=/home/{{ ansible_user }}/dist-git/{{ patch.dist_git.name }}/ with_fileglob: - "{{ lookup('env', 'PWD') }}/generated_rpms/*.rpm" @@ -36,7 +36,7 @@ - name: Create local repo for patched rpm sudo: yes - shell: "createrepo /home/{{ ansible_ssh_user }}/dist-git/{{ patch.dist_git.name }}" + shell: "createrepo /home/{{ ansible_user }}/dist-git/{{ patch.dist_git.name }}" when: "{{ hostvars['localhost'].rpm_build_rc }} == 0" - include: install.yml diff --git a/playbooks/gate.yml b/playbooks/gate.yml new file mode 100644 index 000000000..e9eec627e --- /dev/null +++ 
b/playbooks/gate.yml @@ -0,0 +1,3 @@ +--- +- include: provision.yml +- include: installer/{{ installer.type }}/gate.yml diff --git a/playbooks/installer/packstack/post.yml b/playbooks/installer/packstack/post.yml index cc18fac40..6c55bd8f2 100644 --- a/playbooks/installer/packstack/post.yml +++ b/playbooks/installer/packstack/post.yml @@ -115,6 +115,54 @@ - name: Restart neutron service to apply changes in floating ip pool shell: openstack-service restart neutron +- name: Check if RHBZ1299563 ceilometer nova notifications are enabled + hosts: compute + gather_facts: no + sudo: yes + tasks: + - name: Check if openstack-ceilometer-compute service exists + shell: systemctl is-active openstack-ceilometer-compute + register: ceilometer_status + ignore_errors: yes + + - group_by: key=workaround_rhbz1299563 + when: workarounds.rhbz1299563 is defined and ceilometer_status.stdout == 'active' + +- name: "Workaround RHBZ1299563: Configure ceilometer nova notifications" + hosts: workaround_rhbz1299563 + gather_facts: no + sudo: yes + tasks: + - name: Set instance_usage_audit in nova.conf + ini_file: dest=/etc/nova/nova.conf + section=DEFAULT + option=instance_usage_audit + value=True + + - name: Set instance_usage_audit_period in nova.conf + ini_file: dest=/etc/nova/nova.conf + section=DEFAULT + option=instance_usage_audit_period + value=hour + + - name: Set notify_on_state_change in nova.conf + ini_file: dest=/etc/nova/nova.conf + section=DEFAULT + option=notify_on_state_change + value=vm_and_task_state + + - name: Set notification_driver in nova.conf + ini_file: dest=/etc/nova/nova.conf + section=DEFAULT + option=notification_driver + value=messagingv2 + + - name: Restart openstack-ceilometer-compute + service: name=openstack-ceilometer-compute state=restarted + + - name: Restart nova-compute + service: name=openstack-nova-compute state=restarted + - name: Post install for Neutron server hosts: controller gather_facts: no @@ -143,3 +191,42 @@ name: neutron-server state: 
restarted when: ha|changed or portsec|changed + +- name: Packstack post install + hosts: controller + gather_facts: yes + tasks: + # TODO(tkammer): move all params into khaleesi-settings + - name: Create external network - neutron + quantum_network: + state: present + auth_url: "http://{{ hostvars[inventory_hostname].ansible_default_ipv4.address }}:35357/v2.0/" + login_username: admin + login_password: "{{ hostvars[inventory_hostname].admin_password | default('redhat') }}" + login_tenant_name: admin + name: "{{ installer.network.name }}" + provider_network_type: "{{ installer.network.external.provider_network_type }}" + provider_physical_network: "{{ installer.network.label }}" + provider_segmentation_id: "{{ installer.network.external.vlan.tag|default(omit) }}" + router_external: yes + shared: no + admin_state_up: yes + when: installer is defined and installer.network.type == 'neutron' + + - name: Create subnet for external network - neutron + quantum_subnet: + state: present + auth_url: "http://{{ hostvars[inventory_hostname].ansible_default_ipv4.address }}:35357/v2.0/" + login_username: admin + login_password: "{{ hostvars[inventory_hostname].admin_password | default('redhat') }}" + login_tenant_name: admin + tenant_name: admin + network_name: "{{ installer.network.name }}" + name: external-subnet + enable_dhcp: False + gateway_ip: "{{ provisioner.network.network_list.external.nested.subnet_gateway }}" + cidr: "{{ provisioner.network.network_list.external.nested.subnet_cidr}}" + allocation_pool_start: "{{ provisioner.network.network_list.external.nested.allocation_pool_start }}" + allocation_pool_end: "{{ provisioner.network.network_list.external.nested.allocation_pool_end }}" + when: installer is defined and installer.network.type == 'neutron' + diff --git a/playbooks/installer/packstack/pre.yml b/playbooks/installer/packstack/pre.yml index 33f0d884b..684822044 100644 --- a/playbooks/installer/packstack/pre.yml +++ b/playbooks/installer/packstack/pre.yml @@ 
-1,24 +1,4 @@ --- -- name: Ensure hostname is configured properly - hosts: openstack_nodes - gather_facts: yes - sudo: yes - tasks: - - name: Configure hostname - hostname: name="{{ hostvars[inventory_hostname].inventory_hostname }}" - - - name: Ensure hostname is in /etc/hosts - lineinfile: - dest: /etc/hosts - regexp: '.*{{ inventory_hostname }}$' - line: "{{ hostvars[inventory_hostname].ansible_default_ipv4.address }} {{inventory_hostname}}" - state: present - when: hostvars[inventory_hostname].ansible_default_ipv4.address is defined - - - name: restart systemd-hostnamed - service: name=systemd-hostnamed state=restarted - when: ansible_distribution_version|int > 6 - - name: Create ssh key if one does not exist hosts: controller gather_facts: no diff --git a/playbooks/installer/packstack/repo-rhos.yml b/playbooks/installer/packstack/repo-rhos.yml index 69ac5db52..3eb0ce841 100644 --- a/playbooks/installer/packstack/repo-rhos.yml +++ b/playbooks/installer/packstack/repo-rhos.yml @@ -14,8 +14,13 @@ - name: Install release tool on machines command: "yum localinstall -y {{ product.rpm }}" - - name: Execute rhos-release for packstack poodle/puddle - command: "rhos-release {{ product.version.major }} {{ product.repo.rhos_release.extra_args|join(' ') }}" + - name: Get RHOS repo files + rhos-release: + release: "{{ product.version.major }}" + repo_type: "{{ product.repo.type }}" + state: "{{ product.repo.state }}" + distro: "{{ product.repo.distro | default(omit) }}" + dest: "{{ product.repo.dest | default(omit) }}" - name: repolist command: yum -d 7 repolist diff --git a/playbooks/installer/packstack/run.yml b/playbooks/installer/packstack/run.yml index 3d83afcda..5cff819a0 100644 --- a/playbooks/installer/packstack/run.yml +++ b/playbooks/installer/packstack/run.yml @@ -20,28 +20,28 @@ - name: Edit packstack answer-file from the config lineinfile: - dest="/root/{{ installer.packstack.answer_file }}" - regexp='{{ item.key }}=.*' - line='{{ item.key }}={{ item.value }}' 
+ dest: "/root/{{ installer.packstack.answer_file }}" + regexp: '{{ item.key }}=.*' + line: '{{ item.key }}={{ item.value }}' with_dict: installer.packstack.config - name: Update password values in answer file with default password replace: - dest="/root/{{ installer.packstack.answer_file }}" - regexp="(.*_PASSWORD|.*_PW)=.*" - replace="\1=redhat" + dest: "/root/{{ installer.packstack.answer_file }}" + regexp: "(.*_PASSWORD|.*_PW)=.*" + replace: '\1=redhat' - name: Update network hosts replace: - dest="/root/{{ installer.packstack.answer_file }}" - regexp=^CONFIG_NETWORK_HOSTS=.*$ - replace=CONFIG_NETWORK_HOSTS="{% for host in groups.network %}{{ hostvars[host]['ansible_default_ipv4']['address'] }}{% if not loop.last %},{% endif %}{% endfor %}" + dest: "/root/{{ installer.packstack.answer_file }}" + regexp: ^CONFIG_NETWORK_HOSTS=.*$ + replace: CONFIG_NETWORK_HOSTS={% for host in groups.network %}{{ hostvars[host]['ansible_default_ipv4']['address'] }}{% if not loop.last %},{% endif %}{% endfor %} - name: Update compute hosts replace: - dest="/root/{{ installer.packstack.answer_file }}" - regexp=^CONFIG_COMPUTE_HOSTS=.*$ - replace=CONFIG_COMPUTE_HOSTS="{% for host in groups.compute %}{{ hostvars[host]['ansible_default_ipv4']['address'] }}{% if not loop.last %},{% endif %}{% endfor %}" + dest: "/root/{{ installer.packstack.answer_file }}" + regexp: ^CONFIG_COMPUTE_HOSTS=.*$ + replace: CONFIG_COMPUTE_HOSTS={% for host in groups.compute %}{{ hostvars[host]['ansible_default_ipv4']['address'] }}{% if not loop.last %},{% endif %}{% endfor %} - name: Running packstack shell: "packstack --answer-file=/root/{{ installer.packstack.answer_file }} && touch /root/packstack-already-done" diff --git a/playbooks/installer/project/post.yml b/playbooks/installer/project/post.yml index 0c9c59299..47b821209 100644 --- a/playbooks/installer/project/post.yml +++ b/playbooks/installer/project/post.yml @@ -1,3 +1,6 @@ --- - name: Component post steps hosts: controller + tasks: + - name: Set 
tests path + set_fact: tests_path="{{ installer.component.dir }}" diff --git a/playbooks/installer/project/pre.yml b/playbooks/installer/project/pre.yml index 250704ca7..b856e8c74 100644 --- a/playbooks/installer/project/pre.yml +++ b/playbooks/installer/project/pre.yml @@ -47,12 +47,9 @@ changed_when: "shell_result == 0" - name: Create the RHOS poodle repository - shell: "rhos-release -x {{ product.version.major }}{{ installer_host_repo | default('')}}; rhos-release -d {{ product.version.major }}" + shell: "rhos-release -x; rhos-release -d {{ product.version.major }}" when: product.repo.type is defined and product.repo.type in ['poodle'] - - name: Print installed repositores - shell: "yum repolist -d 7" - - name: print out test env hosts: controller gather_facts: yes diff --git a/playbooks/installer/rdo-manager/README.txt b/playbooks/installer/rdo-manager/README.txt index 8c3c1df08..2d5011427 100644 --- a/playbooks/installer/rdo-manager/README.txt +++ b/playbooks/installer/rdo-manager/README.txt @@ -1,10 +1,24 @@ +See http://docs.openstack.org/developer/tripleo-docs/ for details about tripleo/rdo-manager See http://khaleesi.readthedocs.org/en/master/cookbook.html for a quickstart -To *only* cleanup a virthost: -ansible-playbook -vv --extra-vars @ksgen_settings.yml -i local_hosts playbooks/installer/rdo-manager/cleanup_virthost.yml +The ansible playbooks under rdo-manager should follow the install documentation as described in the tripleo documentation as +closely as possible. 
-To *only* install the undercloud: -ansible-playbook -vv --extra-vars @ksgen_settings.yml -i local_hosts playbooks/installer/rdo-manager/install_undercloud.yml +If you are interested in using instack virtual provisioning (instack-virt-setup) -To *only* deploy the overcloud -ansible-playbook -vv --extra-vars @ksgen_settings.yml -i hosts playbooks/installer/rdo-manager/overcloud/main.yml + To *only* cleanup a virthost: + ansible-playbook -vv --extra-vars @ksgen_settings.yml -i local_hosts playbooks/installer/rdo-manager/cleanup_virthost.yml + + To *only* use instack-virt-setup to provision virt undercloud and overcloud nodes + ansible-playbook -vv --extra-vars @ksgen_settings.yml -i local_hosts playbooks/installer/rdo-manager/instack-virt-setup.yml + +If you are using baremetal or using libvirt w/o instack-virt-setup + + To *only* prepare your environment: + ansible-playbook -vv --extra-vars @ksgen_settings.yml -i local_hosts playbooks/installer/rdo-manager/environment-setup.yml + + To *only* install the undercloud: + ansible-playbook -vv --extra-vars @ksgen_settings.yml -i local_hosts playbooks/installer/rdo-manager/install_undercloud.yml + + To *only* deploy the overcloud + ansible-playbook -vv --extra-vars @ksgen_settings.yml -i hosts playbooks/installer/rdo-manager/overcloud/main.yml diff --git a/playbooks/installer/rdo-manager/advanced-profile-matching.yml b/playbooks/installer/rdo-manager/advanced-profile-matching.yml new file mode 100644 index 000000000..d6420466b --- /dev/null +++ b/playbooks/installer/rdo-manager/advanced-profile-matching.yml @@ -0,0 +1,2 @@ +--- +- include: overcloud/advanced-profile-matching/main.yml diff --git a/playbooks/installer/rdo-manager/cleanup_virthost.yml b/playbooks/installer/rdo-manager/cleanup_virthost.yml index 84b2ed598..4e1076567 100644 --- a/playbooks/installer/rdo-manager/cleanup_virthost.yml +++ b/playbooks/installer/rdo-manager/cleanup_virthost.yml @@ -1,3 +1,10 @@ --- -- include: 
"{{base_dir}}/khaleesi/playbooks/provision.yml" -- include: "{{base_dir}}/khaleesi/playbooks/installer/rdo-manager/undercloud/cleanup-virthost.yml" +- include: "{{base_dir}}/khaleesi/playbooks/provisioner/manual/main.yml" +- name: clean up rdo-manager virthost + hosts: virthost + vars: + - ansible_user: root + roles: + - { role: cleanup_nodes/rdo-manager, + when: (installer.type == "rdo-manager" and provisioner.type == "manual") + } diff --git a/playbooks/installer/rdo-manager/deploy-overcloud-execute.yml b/playbooks/installer/rdo-manager/deploy-overcloud-execute.yml new file mode 100644 index 000000000..6f1cf347d --- /dev/null +++ b/playbooks/installer/rdo-manager/deploy-overcloud-execute.yml @@ -0,0 +1,3 @@ +--- +- include: overcloud/deploy-overcloud/run.yml +- include: overcloud/deploy-overcloud/status.yml diff --git a/playbooks/installer/rdo-manager/deploy-overcloud-prep-template-deployment.yml b/playbooks/installer/rdo-manager/deploy-overcloud-prep-template-deployment.yml new file mode 100644 index 000000000..0c40b7220 --- /dev/null +++ b/playbooks/installer/rdo-manager/deploy-overcloud-prep-template-deployment.yml @@ -0,0 +1,3 @@ +--- +- include: overcloud/deploy-overcloud/pre.yml +- include: "overcloud/deploy-overcloud/{{ installer.deploy.type | default('templates') }}/main.yml" diff --git a/playbooks/installer/rdo-manager/deploy-overcloud-prep-tuskar-deployments.yml b/playbooks/installer/rdo-manager/deploy-overcloud-prep-tuskar-deployments.yml new file mode 100644 index 000000000..0c40b7220 --- /dev/null +++ b/playbooks/installer/rdo-manager/deploy-overcloud-prep-tuskar-deployments.yml @@ -0,0 +1,3 @@ +--- +- include: overcloud/deploy-overcloud/pre.yml +- include: "overcloud/deploy-overcloud/{{ installer.deploy.type | default('templates') }}/main.yml" diff --git a/playbooks/installer/rdo-manager/environment-setup.yml b/playbooks/installer/rdo-manager/environment-setup.yml new file mode 100644 index 000000000..2df6788ec --- /dev/null +++ 
b/playbooks/installer/rdo-manager/environment-setup.yml @@ -0,0 +1,2 @@ +--- +- include: environment-setup/main.yml diff --git a/playbooks/installer/rdo-manager/environment-setup/README.txt b/playbooks/installer/rdo-manager/environment-setup/README.txt new file mode 100644 index 000000000..49b4498c4 --- /dev/null +++ b/playbooks/installer/rdo-manager/environment-setup/README.txt @@ -0,0 +1,7 @@ +This environment-setup directory is the correct location for tools that set up the undercloud and overcloud nodes via outside scripts or instack-virt-setup. When khaleesi provisions the undercloud and overcloud nodes, it is recommended to use the playbooks/provisioner directory for provisioning and to keep any post-provision steps here. + +Currently supported environment setup types: +- baremetal +- virthost + +http://docs.openstack.org/developer/tripleo-docs/environments/environments.html diff --git a/playbooks/installer/rdo-manager/environment-setup/baremetal/main.yml b/playbooks/installer/rdo-manager/environment-setup/baremetal/main.yml new file mode 100644 index 000000000..a1bad33e3 --- /dev/null +++ b/playbooks/installer/rdo-manager/environment-setup/baremetal/main.yml @@ -0,0 +1,3 @@ +--- +- include: "{{base_dir}}/khaleesi/playbooks/installer/rdo-manager/user/main.yml host=undercloud" +- include: run.yml diff --git a/playbooks/installer/rdo-manager/environment-setup/baremetal/run.yml b/playbooks/installer/rdo-manager/environment-setup/baremetal/run.yml new file mode 100644 index 000000000..0bd1aa437 --- /dev/null +++ b/playbooks/installer/rdo-manager/environment-setup/baremetal/run.yml @@ -0,0 +1,122 @@ +--- +- name: Ensure baremetal host has no yum repos installed + hosts: undercloud + vars: + - ansible_user: root + tasks: + - name: clean release rpms + yum: name={{ item }} state=absent + with_items: + - rdo-release* + - epel-release + - rhos-release + + - name: remove any yum repos not owned by rpm + shell: rm -Rf /etc/yum.repos.d/{{ item }} + 
with_items: + - beaker-* + +#this include calls playbooks that set up the appropriate yum repos on the undercloud +- include: "{{base_dir}}/khaleesi/playbooks/installer/rdo-manager/yum_repos/repo-{{ product.name }}.yml repo_host=undercloud" + +- name: Update packages on the host + hosts: undercloud + vars: + - ansible_user: root + tasks: + - name: repolist + command: yum -d 7 repolist + + - name: update all packages + yum: name=* state=latest + +- name: Enable ip forwarding + hosts: undercloud + vars: + - ansible_user: root + tasks: + - name: enabling ip forwarding + sysctl: name="net.ipv4.ip_forward" value=1 sysctl_set=yes reload=yes + when: hw_env.ip_forwarding is defined and hw_env.ip_forwarding == 'true' + +- name: Configure the baremetal undercloud + hosts: undercloud + tasks: + - name: check if instackenv.json exists in root + sudo_user: root + sudo: yes + stat: path="/root/instackenv.json" + register: instackenv_json_root + + - name: copy instackenv.json from root if it exists there + sudo_user: root + sudo: yes + shell: cp /root/instackenv.json {{ instack_user_home }}/instackenv.json + when: instackenv_json_root.stat.exists == True + + - name: get instackenv.json + synchronize: src={{base_dir}}/khaleesi-settings/hardware_environments/{{hw_env.env_type}}/instackenv.json dest={{ instack_user_home }}/instackenv.json + when: instackenv_json_root.stat.exists == False + + - name: chown instackenv.json + sudo_user: root + sudo: yes + file: path={{ instack_user_home }}/instackenv.json owner=stack group=stack + + - name: install ipmitool + sudo_user: root + sudo: yes + yum: name={{ item }} state=latest + with_items: + - OpenIPMI + - OpenIPMI-tools + + - name: install sshpass - DRACS + sudo_user: root + sudo: yes + yum: name=sshpass state=latest + when: hw_env.remote_mgmt == "dracs" + + - name: start IPMI service + shell: > + sudo chkconfig ipmi on; + sudo service ipmi start + + - name: get tools to validate instackenv.json/nodes.json + git: > + 
repo="https://github.com/rthallisey/clapper.git" + dest="{{instack_user_home}}/clapper" + + - name: validate instackenv.json + shell: > + chdir={{instack_user_home}} + python clapper/instackenv-validator.py -f {{ instack_user_home }}/instackenv.json + register: instackenv_validator_output + + - name: fail if instackenv.json fails validation + fail: msg="instackenv.json didn't validate." + when: instackenv_validator_output.stdout.find("SUCCESS") == -1 + + - name: get number of overcloud nodes + shell: > + export IP_LENGTH=(`cat {{ instack_user_home }}/instackenv.json | grep -o 'pm_addr.*' | cut -f2- -d':' | wc -l`); + echo $(($IP_LENGTH)) + register: node_length + + - name: power off node boxes - IPMI + shell: > + export IP=(`cat {{ instack_user_home }}/instackenv.json | grep -o 'pm_addr.*' | cut -f2- -d':' | sed 's/[},\"]//g'`); + export USER=(`cat {{ instack_user_home }}/instackenv.json | grep -o 'pm_user.*' | cut -f2- -d':' |rev | cut -c 2- | rev | sed 's/[},\"]//g'`); + export PASSWORD=(`cat {{ instack_user_home }}/instackenv.json | grep -o 'pm_password.*' | cut -f2- -d':' |rev | cut -c 2- | rev | sed 's/[},\"]//g'`); + ipmitool -I lanplus -H ${IP[item]} -U ${USER[item]} -P ${PASSWORD[item]} power off + with_sequence: count="{{node_length.stdout}}" + when: hw_env.remote_mgmt == "ipmi" + + - name: power off node boxes - DRACS + shell: > + export IP=(`cat {{ instack_user_home }}/instackenv.json | grep -o 'pm_addr.*' | cut -f2- -d':' | sed 's/[},\"]//g'`); + export USER=(`cat {{ instack_user_home }}/instackenv.json | grep -o 'pm_user.*' | cut -f2- -d':' |rev | cut -c 2- | rev | sed 's/[},\"]//g'`); + export PASSWORD=(`cat {{ instack_user_home }}/instackenv.json | grep -o 'pm_password.*' | cut -f2- -d':' |rev | cut -c 2- | rev | sed 's/[},\"]//g'`); + sshpass -p ${PASSWORD[item]} ssh -o "StrictHostKeyChecking=no" ${USER[item]}@${IP[item]} "racadm serveraction powerdown" + with_sequence: count="{{node_length.stdout}}" + when: hw_env.remote_mgmt == "dracs" diff --git 
a/playbooks/installer/rdo-manager/environment-setup/gate.yml b/playbooks/installer/rdo-manager/environment-setup/gate.yml new file mode 100644 index 000000000..a52d23210 --- /dev/null +++ b/playbooks/installer/rdo-manager/environment-setup/gate.yml @@ -0,0 +1,2 @@ +--- +- include: virthost/gate.yml diff --git a/playbooks/installer/rdo-manager/environment-setup/main.yml b/playbooks/installer/rdo-manager/environment-setup/main.yml new file mode 100644 index 000000000..aacd60670 --- /dev/null +++ b/playbooks/installer/rdo-manager/environment-setup/main.yml @@ -0,0 +1,3 @@ +--- +- include: "{{ installer.env.type }}/main.yml" +- include: "{{base_dir}}/khaleesi/playbooks/installer/rdo-manager/yum_repos/repo-{{ product.name }}.yml repo_host=undercloud" diff --git a/playbooks/installer/rdo-manager/environment-setup/virthost/gate.yml b/playbooks/installer/rdo-manager/environment-setup/virthost/gate.yml new file mode 100644 index 000000000..11f9370fb --- /dev/null +++ b/playbooks/installer/rdo-manager/environment-setup/virthost/gate.yml @@ -0,0 +1,31 @@ +--- +- name: clean up rdo-manager virthost + hosts: virthost + vars: + - ansible_user: root + roles: + - { role: cleanup_nodes/rdo-manager, + when: (installer.type == "rdo-manager" and provisioner.type == "manual") + } + +- include: "{{base_dir}}/khaleesi/playbooks/installer/rdo-manager/user/main.yml host=virthost" +- include: "{{base_dir}}/khaleesi/playbooks/installer/rdo-manager/yum_repos/repo-{{ product.name }}.yml repo_host=virthost" +- include: instack-virt-setup/gate.yml + +- name: setup the gating repo on the undercloud + hosts: virthost + tasks: + - name: set the permissions on the rpms + sudo: yes + file: path={{ generated_rpms_dir }} + recurse=yes + owner={{ provisioner.remote_user }} + group={{ provisioner.remote_user }} + mode=0755 + + - name: copy gating_repo package + shell: > + scp -F ssh.config.ansible {{ generated_rpms_dir }}/*.rpm undercloud-from-virthost:{{ instack_user_home }}/ + when: gating_repo is 
defined + +- include: "{{base_dir}}/khaleesi/playbooks/installer/rdo-manager/yum_repos/repo-{{ product.name }}.yml repo_host=undercloud" diff --git a/playbooks/installer/rdo-manager/environment-setup/virthost/instack-virt-setup/README.txt b/playbooks/installer/rdo-manager/environment-setup/virthost/instack-virt-setup/README.txt new file mode 100644 index 000000000..e4225b712 --- /dev/null +++ b/playbooks/installer/rdo-manager/environment-setup/virthost/instack-virt-setup/README.txt @@ -0,0 +1,6 @@ +This playbook follows the documentation from tripleo as closely as possible + +This is one of many ways to prepare your undercloud environment and nodes for the overcloud. + +You can find the related documentation here: +http://docs.openstack.org/developer/tripleo-docs/environments/environments.html#preparing-the-virtual-environment-automated diff --git a/playbooks/installer/rdo-manager/environment-setup/virthost/instack-virt-setup/gate.yml b/playbooks/installer/rdo-manager/environment-setup/virthost/instack-virt-setup/gate.yml new file mode 100644 index 000000000..74868de71 --- /dev/null +++ b/playbooks/installer/rdo-manager/environment-setup/virthost/instack-virt-setup/gate.yml @@ -0,0 +1,25 @@ +- name: Copy the gating package + hosts: virthost + vars: + - ansible_user: root + tasks: + - name: make temp directory + command: mktemp -d + register: temp_dir + + - name: set fact generated_rpms_dir + set_fact: generated_rpms_dir={{ temp_dir.stdout }} + + - name: copy downstream rpm package + copy: src={{ item }} dest={{ generated_rpms_dir }} + with_fileglob: + - "{{ lookup('env', 'PWD') }}/generated_rpms/*.rpm" + when: gating_repo is defined + + - name: install the generated rpm + sudo: yes + shell: "yum localinstall -y {{ generated_rpms_dir }}/*.rpm" + when: gating_repo is defined + +- include: run.yml +- include: post.yml diff --git a/playbooks/installer/rdo-manager/environment-setup/virthost/instack-virt-setup/main.yml 
b/playbooks/installer/rdo-manager/environment-setup/virthost/instack-virt-setup/main.yml new file mode 100644 index 000000000..f7b272cd0 --- /dev/null +++ b/playbooks/installer/rdo-manager/environment-setup/virthost/instack-virt-setup/main.yml @@ -0,0 +1,3 @@ +- include: "{{base_dir}}/khaleesi/playbooks/installer/rdo-manager/yum_repos/repo-{{ product.name }}.yml repo_host=virthost" +- include: run.yml +- include: post.yml diff --git a/playbooks/installer/rdo-manager/environment-setup/virthost/instack-virt-setup/post.yml b/playbooks/installer/rdo-manager/environment-setup/virthost/instack-virt-setup/post.yml new file mode 100644 index 000000000..c7bf264b1 --- /dev/null +++ b/playbooks/installer/rdo-manager/environment-setup/virthost/instack-virt-setup/post.yml @@ -0,0 +1,7 @@ +--- +- name: copy the guest image to the undercloud + hosts: virthost + tasks: + - name: upload the guest-image on the undercloud + command: scp -F ssh.config.ansible {{instack_user_home}}/{{ distro.images[distro.name][distro.full_version].guest_image_name }} undercloud-from-virthost:{{ instack_user_home }}/ + diff --git a/playbooks/installer/rdo-manager/environment-setup/virthost/instack-virt-setup/run.yml b/playbooks/installer/rdo-manager/environment-setup/virthost/instack-virt-setup/run.yml new file mode 100644 index 000000000..8e5ffa9b7 --- /dev/null +++ b/playbooks/installer/rdo-manager/environment-setup/virthost/instack-virt-setup/run.yml @@ -0,0 +1,205 @@ +--- +- name: setup the virt host + hosts: virthost + tasks: + - name: set fact stack user home + set_fact: instack_user_home=/home/{{ provisioner.remote_user }} + + - name: get the guest-image + sudo: yes + environment: + http_proxy: "{{ installer.http_proxy_url }}" + get_url: > + url="{{ distro.images[distro.name][distro.full_version].remote_file_server }}{{ distro.images[distro.name][distro.full_version].guest_image_name }}" + dest=/root/{{ distro.images[distro.name][distro.full_version].guest_image_name }} + + - name: copy the 
guest-image in stack user home + sudo: yes + command: cp /root/{{ distro.images[distro.name][distro.full_version].guest_image_name }} {{instack_user_home}}/{{ distro.images[distro.name][distro.full_version].guest_image_name }} + + - name: set the right permissions for the guest-image + sudo: yes + file: > + path={{instack_user_home}}/{{ distro.images[distro.name][distro.full_version].guest_image_name }} + owner={{ provisioner.remote_user }} + group={{ provisioner.remote_user }} + + - name: install yum-plugin-priorities for rdo-manager + sudo: yes + yum: name={{item}} state=present + with_items: + - yum-plugin-priorities + when: product.name == "rdo" + + - name: install rdo-manager-deps + sudo: yes + yum: name={{item}} state=present + with_items: + - python-tripleoclient + when: product.name == "rdo" or product.full_version == "8-director" + + - name: install python-rdomanager-oscplugin + sudo: yes + yum: name=python-rdomanager-oscplugin state=present + + - name: setup environment vars + template: src={{ base_dir }}/khaleesi/playbooks/installer/rdo-manager/templates/virt-setup-env.j2 dest=~/virt-setup-env mode=0755 + + - name: Contents of virt-setup-env + shell: > + cat {{ instack_user_home }}/virt-setup-env + + - name: Patch instack-virt-setup to ensure dhcp.leases is not used to determine ip (workaround https://review.openstack.org/#/c/232584) + sudo: yes + lineinfile: + dest: /usr/bin/instack-virt-setup + regexp: "/var/lib/libvirt/dnsmasq/default.leases" + line: " IP=$(ip n | grep $(tripleo get-vm-mac $UNDERCLOUD_VM_NAME) | awk '{print $1;}')" + when: workarounds.enabled is defined and workarounds.enabled|bool + + - name: run instack-virt-setup + shell: > + source {{ instack_user_home }}/virt-setup-env; + instack-virt-setup > {{ instack_user_home }}/instack-virt-setup.log; + register: instack_virt_setup_result + ignore_errors: yes + + - name: destroy default pool + sudo: yes + command: virsh pool-destroy default + ignore_errors: true + when: 
"instack_virt_setup_result.rc !=0" + + - name: update libvirtd unix_sock_group + sudo: yes + lineinfile: + dest: /etc/libvirt/libvirtd.conf + regexp: ^unix_sock_group + line: 'unix_sock_group = "{{ provisioner.remote_user }}"' + when: "instack_virt_setup_result.rc !=0" + + - name: remove libvirt qemu capabilities cache + sudo: yes + command: rm -Rf /var/cache/libvirt/qemu/capabilities/ + when: "instack_virt_setup_result.rc != 0" + # more workaround for the SATA error RHBZ#1195882 + + - name: restart libvirtd + sudo: yes + service: name=libvirtd state=restarted + when: "instack_virt_setup_result.rc != 0" + + - name: inspect virsh capabilities + command: 'virsh capabilities' + when: "instack_virt_setup_result.rc != 0" + + - name: stop virbr0 + sudo: yes + command: ip link set virbr0 down + ignore_errors: true + when: "instack_virt_setup_result.rc != 0" + + - name: delete libvirt bridge virbr0 + sudo: yes + command: brctl delbr virbr0 + ignore_errors: true + when: "instack_virt_setup_result.rc != 0" + + - name: start default libvirt network + sudo: yes + command: virsh net-start default + ignore_errors: true + when: "instack_virt_setup_result.rc != 0" + + - name: retry run instack-virt-setup + shell: > + virsh undefine instack; + source {{ instack_user_home }}/virt-setup-env; + instack-virt-setup > {{ instack_user_home }}/instack-virt-setup-retry.log; + when: "instack_virt_setup_result.rc !=0" + + - name: print out all the VMs + shell: > + sudo virsh list --all + + - name: get undercloud vm ip address + shell: > + export PATH='/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/stack/bin'; + ip n | grep $(tripleo get-vm-mac instack) | awk '{print $1;}' + when: undercloud_ip is not defined + register: undercloud_vm_ip_result + + - name: set_fact for undercloud ip + set_fact: undercloud_ip={{ undercloud_vm_ip_result.stdout }} + +- name: set the undercloud ip as a fact + hosts: localhost + tasks: + - name: set_fact for undercloud ip + set_fact: undercloud_ip={{ 
hostvars['host0'].undercloud_ip }} + + - name: debug undercloud_ip + debug: var=hostvars['localhost'].undercloud_ip + +- name: add the host to the ansible inventory and setup ssh keys + hosts: virthost + tasks: + - name: wait until ssh is available on undercloud node + wait_for: + host={{ hostvars['localhost'].undercloud_ip }} + state=started + port=22 + delay=15 + timeout=300 + + - name: add undercloud host + add_host: + name=undercloud + groups=undercloud + ansible_host=undercloud + ansible_fqdn=undercloud + ansible_user="{{ provisioner.remote_user }}" + ansible_ssh_private_key_file="{{ provisioner.key_file }}" + gating_repo="{{ gating_repo is defined and gating_repo }}" + + - name: setup ssh config + template: src={{ base_dir }}/khaleesi/playbooks/installer/rdo-manager/templates/ssh_config.j2 dest=~/ssh.config.ansible mode=0755 + + - name: copy ssh_config back to the slave + fetch: src=~/ssh.config.ansible dest="{{ base_dir }}/khaleesi/ssh.config.ansible" flat=yes + + - name: copy id_rsa key back to the slave + fetch: src=~/.ssh/id_rsa dest="{{ base_dir }}/khaleesi/id_rsa_virt_host" flat=yes + + - name: copy undercloud root user authorized_keys to stack user + shell: 'ssh -F ssh.config.ansible undercloud-from-virthost "cp /root/.ssh/authorized_keys /home/stack/.ssh/"' + + - name: chown authorized_keys for stack user + shell: 'ssh -F ssh.config.ansible undercloud-from-virthost "chown stack:stack /home/stack/.ssh/authorized_keys"' + + +- name: regenerate the inventory file after adding hosts + hosts: localhost + tasks: + - name: create inventory from template + template: + dest: "{{ lookup('env', 'PWD') }}/{{ tmp.node_prefix }}hosts" + src: "{{ base_dir }}/khaleesi/playbooks/provisioner/templates/inventory.j2" + + - name: symlink inventory to a static name + file: + dest: "{{ lookup('env', 'PWD') }}/hosts" + state: link + src: "{{ lookup('env', 'PWD') }}/{{ tmp.node_prefix }}hosts" + +- name: test host connection + hosts: all:!localhost + tasks: + - name: test ssh 
+ command: hostname + + - name: check distro + command: cat /etc/redhat-release + + - name: set fact stack user home + set_fact: instack_user_home=/home/{{ provisioner.remote_user }} diff --git a/playbooks/installer/rdo-manager/environment-setup/virthost/main.yml b/playbooks/installer/rdo-manager/environment-setup/virthost/main.yml new file mode 100644 index 000000000..5348018bf --- /dev/null +++ b/playbooks/installer/rdo-manager/environment-setup/virthost/main.yml @@ -0,0 +1,12 @@ +--- +- name: clean up rdo-manager virthost + hosts: virthost + vars: + - ansible_user: root + roles: + - { role: cleanup_nodes/rdo-manager, + when: (installer.type == "rdo-manager" and provisioner.type == "manual") + } + +- include: "{{base_dir}}/khaleesi/playbooks/installer/rdo-manager/user/main.yml host=virthost" +- include: instack-virt-setup/main.yml diff --git a/playbooks/installer/rdo-manager/gate.yml b/playbooks/installer/rdo-manager/gate.yml new file mode 100644 index 000000000..3d554bf2f --- /dev/null +++ b/playbooks/installer/rdo-manager/gate.yml @@ -0,0 +1,5 @@ +--- +- include: environment-setup/gate.yml +- include: undercloud/gate.yml +- include: images/main.yml +- include: overcloud/main.yml diff --git a/playbooks/installer/rdo-manager/heat-templates.yml b/playbooks/installer/rdo-manager/heat-templates.yml new file mode 100644 index 000000000..f2f836207 --- /dev/null +++ b/playbooks/installer/rdo-manager/heat-templates.yml @@ -0,0 +1,2 @@ +--- +- include: overcloud/heat-templates/main.yml diff --git a/playbooks/installer/rdo-manager/images.yml b/playbooks/installer/rdo-manager/images.yml new file mode 100644 index 000000000..90c447bb0 --- /dev/null +++ b/playbooks/installer/rdo-manager/images.yml @@ -0,0 +1,2 @@ +--- +- include: "{{base_dir}}/khaleesi/playbooks/installer/rdo-manager/images/main.yml" diff --git a/playbooks/installer/rdo-manager/images/README.txt b/playbooks/installer/rdo-manager/images/README.txt new file mode 100644 index 000000000..0e27685ef --- /dev/null 
+++ b/playbooks/installer/rdo-manager/images/README.txt @@ -0,0 +1,5 @@ +This playbook follows the documentation from tripleo as closely as possible + +http://docs.openstack.org/developer/tripleo-docs/basic_deployment/basic_deployment_cli.html#get-images +http://docs.openstack.org/developer/tripleo-docs/basic_deployment/basic_deployment_cli.html#upload-images + diff --git a/playbooks/installer/rdo-manager/images/main.yml b/playbooks/installer/rdo-manager/images/main.yml new file mode 100644 index 000000000..120b3f6aa --- /dev/null +++ b/playbooks/installer/rdo-manager/images/main.yml @@ -0,0 +1,3 @@ +--- +- include: run.yml +- include: upload.yml diff --git a/playbooks/installer/rdo-manager/images/run.yml b/playbooks/installer/rdo-manager/images/run.yml new file mode 100644 index 000000000..8642ec99f --- /dev/null +++ b/playbooks/installer/rdo-manager/images/run.yml @@ -0,0 +1,130 @@ +--- +- name: setup the undercloud + hosts: undercloud + tasks: + - name: Create overcloud_images directory + file: path={{ instack_user_home }}/overcloud_images state=directory + +- name: build images on the virthost + hosts: virthost + tasks: + - name: install python-passlib to workaround rhbz1278972 + yum: + name: python-passlib + state: present + sudo: yes + when: workarounds.rhbz1278972 is defined + + - name: setup environment vars + template: src={{ base_dir }}/khaleesi/playbooks/installer/rdo-manager/templates/build-img-env.j2 + dest=~/build-img-env mode=0755 + when: installer.overcloud_images | default('build') == "build" + + - name: ensure /tmp/svc-map-services is absent + file: path=/tmp/svc-map-services state=absent + sudo: yes + when: installer.overcloud_images | default('build') == "build" + + - name: Contents of build-img-env + shell: > + cat {{ instack_user_home }}/build-img-env + when: installer.overcloud_images | default('build') == "build" + + - name: Create overcloud_images directory + file: path={{ instack_user_home }}/overcloud_images state=directory + when: 
installer.overcloud_images | default('build') == "build" + + - name: build all the images + shell: > + source {{ instack_user_home }}/build-img-env; + pushd {{ instack_user_home }}/overcloud_images; + openstack overcloud image build --all > {{ instack_user_home }}/openstack-build-images.log + when: installer.overcloud_images | default('build') == "build" + + - name: expose errors during DIB build + shell: cat {{ instack_user_home }}/openstack-build-images.log | grep -v liberror | grep -v libgpg-error | grep -A 1 -B 1 error + ignore_errors: true + when: installer.overcloud_images | default('build') == "build" + + - name: list the files in overcloud_images + command: ls -la {{ instack_user_home }}/overcloud_images/ + when: installer.overcloud_images | default('build') == "build" + + - name: scp the overcloud_images to the undercloud + shell: scp -r -F ssh.config.ansible {{ instack_user_home }}/overcloud_images/* \ + undercloud-from-virthost-as-stack:{{ instack_user_home }}/overcloud_images/ + when: installer.overcloud_images | default('build') == "build" + + - name: scp the openstack-build-images.log file to the undercloud + shell: scp -r -F ssh.config.ansible {{ instack_user_home }}/openstack-build-images.log \ + undercloud-from-virthost-as-stack:{{ instack_user_home }}/ + when: installer.overcloud_images | default('build') == "build" + +- name: build the images on baremetal + hosts: undercloud:&baremetal + tasks: + - name: setup environment vars + template: src={{ base_dir }}/khaleesi/playbooks/installer/rdo-manager/templates/build-img-env.j2 + dest=~/build-img-env mode=0755 + + - name: ensure /tmp/svc-map-services is absent + file: path=/tmp/svc-map-services state=absent + sudo: yes + when: installer.overcloud_images | default('build') == "build" + + - name: Contents of build-img-env + shell: > + cat {{ instack_user_home }}/build-img-env + when: installer.overcloud_images | default('build') == "build" + + - name: get the guest-image + get_url: > + url="{{ 
distro.images[distro.name][distro.full_version].remote_file_server }}{{ distro.images[distro.name][distro.full_version].guest_image_name }}" + dest=/home/stack/overcloud_images/{{ distro.images[distro.name][distro.full_version].guest_image_name }} + timeout=360 + + - name: build all the images + shell: > + source {{ instack_user_home }}/build-img-env; + pushd {{ instack_user_home }}/overcloud_images; + openstack overcloud image build --all > {{ instack_user_home }}/openstack-build-images.log + when: installer.overcloud_images | default('build') == "build" + + +- name: import images + hosts: undercloud + environment: + http_proxy: "{{ installer.http_proxy_url }}" + tasks: + - name: ensure wget is installed + yum: name=wget state=latest + sudo: yes + + - name: download the pre-built rdo-manager images + shell: > + pushd {{ instack_user_home }}/overcloud_images; + wget --quiet -c -O {{ instack_user_home }}/overcloud_images/{{ item }}.tar + "{{ installer.images.url[product.name][product.full_version][product.build][installer.images.version] }}{{ item }}.tar" + with_items: "{{ installer.images[product.full_version].files|list }}" + when: installer.overcloud_images is defined and installer.overcloud_images == "import" + + +- name: prep images for glance + hosts: undercloud + tasks: + - name: untar the overcloud images + shell: > + pushd {{ instack_user_home }}/overcloud_images; + tar -xvf "{{ item }}.tar" + with_items: "{{ installer.images[product.full_version].files|list }}" + when: installer.overcloud_images is defined and installer.overcloud_images == "import" + + - name: download the fedora-user image + get_url: url="{{ distro.images['fedora']['21'].remote_file_server }}{{ distro.images['fedora']['21'].guest_image_name }}" + dest={{ instack_user_home }}/overcloud_images/fedora-user.qcow2 + force=no + timeout=60 + + - name: list the files in overcloud_images + command: ls -la {{ instack_user_home }}/overcloud_images/ + diff --git 
a/playbooks/installer/rdo-manager/images/upload.yml b/playbooks/installer/rdo-manager/images/upload.yml new file mode 100644 index 000000000..65ac2d66e --- /dev/null +++ b/playbooks/installer/rdo-manager/images/upload.yml @@ -0,0 +1,12 @@ +--- +- name: upload images into glance + hosts: undercloud + tasks: + - name: list the files in overcloud_images + command: ls -la {{ instack_user_home }}/overcloud_images/ + + - name: prepare for overcloud by loading the images into glance + shell: > + source {{ instack_user_home }}/stackrc; + pushd {{ instack_user_home }}/overcloud_images; + openstack overcloud image upload diff --git a/playbooks/installer/rdo-manager/install_undercloud.yml b/playbooks/installer/rdo-manager/instack-virt-setup.yml similarity index 66% rename from playbooks/installer/rdo-manager/install_undercloud.yml rename to playbooks/installer/rdo-manager/instack-virt-setup.yml index dc9a01940..0fa5ad383 100644 --- a/playbooks/installer/rdo-manager/install_undercloud.yml +++ b/playbooks/installer/rdo-manager/instack-virt-setup.yml @@ -1,3 +1,3 @@ --- - include: "{{base_dir}}/khaleesi/playbooks/provisioner/manual/main.yml" -- include: undercloud/main.yml +- include: environment-setup/main.yml diff --git a/playbooks/installer/rdo-manager/install-undercloud.yml b/playbooks/installer/rdo-manager/install-undercloud.yml new file mode 100644 index 000000000..d80df2b84 --- /dev/null +++ b/playbooks/installer/rdo-manager/install-undercloud.yml @@ -0,0 +1,2 @@ +--- +- include: undercloud/main.yml diff --git a/playbooks/installer/rdo-manager/introspect-nodes.yml b/playbooks/installer/rdo-manager/introspect-nodes.yml new file mode 100644 index 000000000..9c73d34a8 --- /dev/null +++ b/playbooks/installer/rdo-manager/introspect-nodes.yml @@ -0,0 +1,2 @@ +--- +- include: overcloud/introspect-nodes/main.yml diff --git a/playbooks/installer/rdo-manager/main.yml b/playbooks/installer/rdo-manager/main.yml index 65d07dfbd..5d609744f 100644 --- 
a/playbooks/installer/rdo-manager/main.yml +++ b/playbooks/installer/rdo-manager/main.yml @@ -1,3 +1,5 @@ --- +- include: environment-setup/main.yml - include: undercloud/main.yml +- include: images/main.yml - include: overcloud/main.yml diff --git a/playbooks/installer/rdo-manager/nameserver.yml b/playbooks/installer/rdo-manager/nameserver.yml new file mode 100644 index 000000000..31d2a201e --- /dev/null +++ b/playbooks/installer/rdo-manager/nameserver.yml @@ -0,0 +1,2 @@ +--- +- include: overcloud/nameserver/main.yml diff --git a/playbooks/installer/rdo-manager/openstack-virtual-baremetal/main.yml b/playbooks/installer/rdo-manager/openstack-virtual-baremetal/main.yml new file mode 100644 index 000000000..e69de29bb diff --git a/playbooks/installer/rdo-manager/openstack-virtual-baremetal/run.yml b/playbooks/installer/rdo-manager/openstack-virtual-baremetal/run.yml new file mode 100644 index 000000000..fd9c6206e --- /dev/null +++ b/playbooks/installer/rdo-manager/openstack-virtual-baremetal/run.yml @@ -0,0 +1,44 @@ +--- +- name: Set up for custom template deploy with nova change + hosts: undercloud:&openstack_virtual_baremetal + tasks: + - name: clone openstack-virtual-baremetal repo + git: + repo=https://github.com/cybertron/openstack-virtual-baremetal/ + dest={{instack_user_home}}/openstack-virtual-baremetal + + - name: pin openstack virtual baremetal to a specific hash + shell: > + chdir={{instack_user_home}}/openstack-virtual-baremetal + git reset --hard {{ installer.custom_deploy.ovb_pin_version }} + when: installer.custom_deploy.ovb_pin_version is defined + + - name: copy tripleo-heat-templates to custom + shell: > + cp -r /usr/share/openstack-tripleo-heat-templates/ {{ instack_user_home }}/custom + + - name: add the necessary hieradata configuration + shell: > + echo "neutron::agents::ml2::ovs::firewall_driver: neutron.agent.firewall.NoopFirewallDriver" >> {{ instack_user_home }}/custom/puppet/hieradata/common.yaml + + - name: create param.ini file + 
local_action: shell echo "DNS_SERVER={{ hw_env.dns_server }}" > {{ base_dir }}/param.ini + + - name: check that param.ini file exists + local_action: wait_for path="{{ base_dir }}/param.ini" + + - name: add other variables to param.ini file + local_action: shell echo -e "PARENT_WORKSPACE_DIR={{ base_dir }}\nREMOTE_FILE_SERVER={{ installer.custom_deploy.image.remote_file_server }}\nIMAGE_NAME={{ installer.custom_deploy.image.name }}\nPROVISION_CIDR={{ installer.custom_deploy.host_cloud_networks.provision.cidr }}\nPRIVATE_CIDR={{ installer.custom_deploy.host_cloud_networks.private.cidr }}\nPUBLIC_CIDR={{ installer.custom_deploy.host_cloud_networks.public.cidr }}" >> {{ base_dir }}/param.ini + + - name: get ctlplane subnet uuid + register: ctlplane_subnet_uuid + shell: > + source {{ instack_user_home }}/stackrc; + neutron net-show ctlplane -f value -F subnets; + when: installer.env.type == "virthost" + + - name: update dns server on ctlplane + shell: > + source {{ instack_user_home }}/stackrc; + neutron subnet-update {{ ctlplane_subnet_uuid.stdout }} --dns_nameservers list=true {{ hw_env.dns_server }} + when: installer.env.type == "virthost" diff --git a/playbooks/installer/rdo-manager/overcloud/advanced-profile-matching/main.yml b/playbooks/installer/rdo-manager/overcloud/advanced-profile-matching/main.yml new file mode 100644 index 000000000..5d4805e6a --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/advanced-profile-matching/main.yml @@ -0,0 +1,3 @@ +--- +- include: run.yml +- include: post.yml diff --git a/playbooks/installer/rdo-manager/overcloud/advanced-profile-matching/post.yml b/playbooks/installer/rdo-manager/overcloud/advanced-profile-matching/post.yml new file mode 100644 index 000000000..299338217 --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/advanced-profile-matching/post.yml @@ -0,0 +1,13 @@ +--- +- name: wait for nova + hosts: undercloud + tasks: + - name: wait until nova becomes aware of first bare metal instance + shell: 
> + source {{ instack_user_home }}/stackrc; + nova hypervisor-stats | grep ' vcpus ' | head -n1 | awk '{ print $4; }' + register: vcpu_count_single + retries: 20 + delay: 15 + until: vcpu_count_single.stdout|int > 0 + ignore_errors: true diff --git a/playbooks/installer/rdo-manager/overcloud/run-matching-ahc.yml b/playbooks/installer/rdo-manager/overcloud/advanced-profile-matching/run-matching-ahc.yml similarity index 97% rename from playbooks/installer/rdo-manager/overcloud/run-matching-ahc.yml rename to playbooks/installer/rdo-manager/overcloud/advanced-profile-matching/run-matching-ahc.yml index d25015e4f..03603b043 100644 --- a/playbooks/installer/rdo-manager/overcloud/run-matching-ahc.yml +++ b/playbooks/installer/rdo-manager/overcloud/advanced-profile-matching/run-matching-ahc.yml @@ -15,7 +15,7 @@ - name: create edeploy state file sudo: yes template: - src=templates/edeploy-state.j2 + src={{ base_dir }}/khaleesi/playbooks/installer/rdo-manager/templates/edeploy-state.j2 dest=/etc/ahc-tools/edeploy/state force=yes mode=0644 diff --git a/playbooks/installer/rdo-manager/overcloud/run-matching-basic.yml b/playbooks/installer/rdo-manager/overcloud/advanced-profile-matching/run-matching-basic.yml similarity index 100% rename from playbooks/installer/rdo-manager/overcloud/run-matching-basic.yml rename to playbooks/installer/rdo-manager/overcloud/advanced-profile-matching/run-matching-basic.yml diff --git a/playbooks/installer/rdo-manager/overcloud/advanced-profile-matching/run.yml b/playbooks/installer/rdo-manager/overcloud/advanced-profile-matching/run.yml new file mode 100644 index 000000000..63e63b0ae --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/advanced-profile-matching/run.yml @@ -0,0 +1,2 @@ +--- +- include: run-matching-{{ installer.match_style | default('basic') }}.yml diff --git a/playbooks/installer/rdo-manager/overcloud/ansible-inventory.yml b/playbooks/installer/rdo-manager/overcloud/ansible-inventory.yml new file mode 100644 index 
000000000..cfdb2668c --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/ansible-inventory.yml @@ -0,0 +1,75 @@ +--- +- name: Post deploy + hosts: undercloud + tasks: + - name: copy the undercloud id_rsa key back to the slave + fetch: src=~/.ssh/id_rsa dest="{{ base_dir }}/khaleesi/id_rsa_undercloud" flat=yes + + - name: copy get-overcloud-nodes.py to undercloud + template: > + src={{ base_dir }}/khaleesi/playbooks/installer/rdo-manager/templates/get-overcloud-nodes.py.j2 + dest={{ instack_user_home }}/get-overcloud-nodes.py + mode=0755 + + - name: fetch overcloud node names and IPs + shell: > + source {{ instack_user_home }}/stackrc; + python {{ instack_user_home }}/get-overcloud-nodes.py + register: overcloud_nodes + ignore_errors: yes + + - name: add each overcloud controller node to ansible + with_dict: overcloud_nodes.stdout + ignore_errors: yes + add_host: + name={{ item.key }} + groups=overcloud,controller + ansible_host={{ item.key }} + ansible_fqdn={{ item.value }} + ansible_user="heat-admin" + ansible_ssh_private_key_file="{{ lookup('env', 'PWD') }}/id_rsa_undercloud" + when: item.key.startswith('overcloud-controller') + + - name: add each overcloud compute node to ansible + with_dict: overcloud_nodes.stdout + ignore_errors: yes + add_host: + name={{ item.key }} + groups=overcloud,compute + ansible_host={{ item.key }} + ansible_fqdn={{ item.value }} + ansible_user="heat-admin" + ansible_ssh_private_key_file="{{ lookup('env', 'PWD') }}/id_rsa_undercloud" + when: item.key.startswith('overcloud-compute') + + - name: add each overcloud ceph node to ansible + with_dict: overcloud_nodes.stdout + ignore_errors: yes + add_host: + name={{ item.key }} + groups=overcloud,ceph + ansible_host={{ item.key }} + ansible_fqdn={{ item.value }} + ansible_user="heat-admin" + ansible_ssh_private_key_file="{{ lookup('env', 'PWD') }}/id_rsa_undercloud" + when: item.key.startswith('overcloud-ceph') + +- name: regenerate the inventory file after adding hosts + hosts: 
localhost + tasks: + - name: set_fact for undercloud ip #required for regeneration of ssh.config.ansible + set_fact: undercloud_ip={{ hostvars['undercloud']['ansible_default_ipv4']['address'] }} + + - name: create inventory from template + template: + dest: "{{ lookup('env', 'PWD') }}/{{ tmp.node_prefix }}hosts" + src: "{{ base_dir }}/khaleesi/playbooks/provisioner/templates/inventory.j2" + + - name: symlink inventory to a static name + file: + dest: "{{ lookup('env', 'PWD') }}/hosts" + state: link + src: "{{ lookup('env', 'PWD') }}/{{ tmp.node_prefix }}hosts" + + - name: regenerate ssh config + template: src={{ base_dir }}/khaleesi/playbooks/installer/rdo-manager/templates/ssh_config.j2 dest={{ base_dir }}/khaleesi/ssh.config.ansible mode=0755 diff --git a/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/main.yml b/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/main.yml new file mode 100644 index 000000000..c2f1bf0cf --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/main.yml @@ -0,0 +1,4 @@ +--- +- include: pre.yml +- include: "{{ installer.deploy.type | default('templates') }}/main.yml" +- include: run.yml diff --git a/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/plan/main.yml b/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/plan/main.yml new file mode 100644 index 000000000..f01787f39 --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/plan/main.yml @@ -0,0 +1,2 @@ +--- +- include: run.yml diff --git a/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/plan/run.yml b/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/plan/run.yml new file mode 100644 index 000000000..04d54503c --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/plan/run.yml @@ -0,0 +1,27 @@ +- name: setup deployment for a tuskar (plan) style deployment + hosts: undercloud + tasks: + - name: get plan list + shell: > + source {{ instack_user_home 
}}/stackrc; + openstack management plan list | grep overcloud | cut -d " " -f2 + register: overcloud_uuid_result + when: installer.deploy.type == 'plan' + + - name: set fact for openstack management plan + set_fact: + overcloud_uuid: "{{ overcloud_uuid_result.stdout }}" + when: installer.deploy.type == 'plan' + + - name: set plan values for plan based ceph deployments + shell: > + source {{ instack_user_home }}/stackrc; + source {{ instack_user_home }}/deploy-nodesrc; + if [ "$CEPHSTORAGESCALE" -gt "0" ]; then + openstack management plan set {{ overcloud_uuid }} \ + -P Controller-1::CinderEnableIscsiBackend=false \ + -P Controller-1::CinderEnableRbdBackend=true \ + -P Controller-1::GlanceBackend=rbd \ + -P Compute-1::NovaEnableRbdBackend=true; + fi + when: installer.deploy.type == 'plan' diff --git a/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/pre.yml b/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/pre.yml new file mode 100644 index 000000000..3ca620990 --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/pre.yml @@ -0,0 +1,46 @@ +- name: prepare for the overcloud deployment + hosts: undercloud + tasks: + - name: get ironic node ids (workaround for bz 1246641) + shell: > + source {{ instack_user_home }}/stackrc; + ironic node-list | grep 'None' | awk '{ print $2; }' + register: ironic_node_ids + when: workarounds.enabled is defined and workarounds.enabled|bool + + - name: power off ironic nodes (workaround for bz 1246641) + shell: > + source {{ instack_user_home }}/stackrc; + ironic node-set-power-state {{item}} 'off' + with_items: ironic_node_ids.stdout_lines + when: workarounds.enabled is defined and workarounds.enabled|bool + + - name: get number of nodes that could be used for the overcloud + shell: > + if [ -f {{ instack_user_home }}/instackenv.json ]; then + cat {{ instack_user_home }}/instackenv.json | grep -o pm_addr | wc -l + else + echo {{ installer.nodes.node_count | default('3') }} + fi + register:
number_of_possible_nodes + + - name: poll for nodes to be in powered off state + shell: > + source {{ instack_user_home }}/stackrc; + ironic node-list | grep 'power off' | wc -l + register: ironic_node_power_off + retries: 10 + until: ironic_node_power_off.stdout == number_of_possible_nodes.stdout + + - name: copy template file with environment variables for overcloud nodes + template: + src={{ base_dir }}/khaleesi/playbooks/installer/rdo-manager/templates/deploy-nodes.j2 + dest={{ instack_user_home }}/deploy-nodesrc + mode=0755 + + - name: copy template file with environment variables for overcloud deploy + template: + src={{ base_dir }}/khaleesi/playbooks/installer/rdo-manager/templates/deploy-overcloudrc.j2 + dest={{ instack_user_home }}/deploy-overcloudrc + mode=0755 + diff --git a/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/run.yml b/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/run.yml new file mode 100644 index 000000000..10ee86c5a --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/run.yml @@ -0,0 +1,34 @@ +--- +- name: deploy the overcloud + hosts: undercloud + tasks: + - name: echo deploy command + shell: > + source {{ instack_user_home }}/stackrc; + source {{ instack_user_home }}/deploy-nodesrc; + source {{ instack_user_home }}/deploy-overcloudrc; + echo $DEPLOY_COMMAND + register: overcloud_deploy_command + + - name: deploy-overcloud + shell: > + source {{ instack_user_home }}/stackrc; + {{ overcloud_deploy_command.stdout }} &> overcloud_deployment_console.log + register: overcloud_deployment_result + ignore_errors: yes + + - name: echo deploy-overcloud return code + debug: var=overcloud_deployment_result.rc + + - name: heat stack-list + shell: > + source {{ instack_user_home }}/stackrc; + heat stack-list + ignore_errors: yes + + - name: overcloud deployment logs + debug: msg=" Please refer to the undercloud log file for detailed status. 
The deployment debug logs are stored under /home/stack" + + - name: set fact overcloud_deployment_result + set_fact: + overcloud_deployment_result: "{{ overcloud_deployment_result.rc }}" diff --git a/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/templates/main.yml b/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/templates/main.yml new file mode 100644 index 000000000..f01787f39 --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/templates/main.yml @@ -0,0 +1,2 @@ +--- +- include: run.yml diff --git a/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/templates/run.yml b/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/templates/run.yml new file mode 100644 index 000000000..74c0f11ca --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/deploy-overcloud/templates/run.yml @@ -0,0 +1,29 @@ +- name: setup deployment for a heat templated (templates) style deployment + hosts: undercloud + tasks: + - name: echo deploy command + shell: > + source {{ instack_user_home }}/stackrc; + source {{ instack_user_home }}/deploy-nodesrc; + source {{ instack_user_home }}/deploy-overcloudrc; + echo $DEPLOY_COMMAND + register: overcloud_deploy_command + + - name: find the env files to be used in deploy + shell: > + echo {{ overcloud_deploy_command.stdout }} | grep -o -e '\-e .*yaml' | sed s'/\-e //g' | sed s'#[A-Z a-z 0-9 _ -]*\.yaml##g' + register: env_files + + - name: clone template validation tools + git: + repo=https://github.com/openstack/tripleo-heat-templates.git + dest={{instack_user_home}}/tripleo-heat-templates + + - name: validate the yaml files + shell: > + chdir={{instack_user_home}} + python tripleo-heat-templates/tools/yaml-validate.py {{ item }} + register: validate_yaml_output + with_items: env_files.stdout.split('\n') + failed_when: validate_yaml_output.stdout.find('Validation failed on') != -1 + diff --git a/playbooks/installer/rdo-manager/overcloud/flavors/README 
b/playbooks/installer/rdo-manager/overcloud/flavors/README new file mode 100644 index 000000000..aaa0999a6 --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/flavors/README @@ -0,0 +1 @@ +#See advanced-profile-matching diff --git a/playbooks/installer/rdo-manager/overcloud/heat-templates/README b/playbooks/installer/rdo-manager/overcloud/heat-templates/README new file mode 100644 index 000000000..e69de29bb diff --git a/playbooks/installer/rdo-manager/overcloud/heat-templates/main.yml b/playbooks/installer/rdo-manager/overcloud/heat-templates/main.yml new file mode 100644 index 000000000..16ce46cee --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/heat-templates/main.yml @@ -0,0 +1,3 @@ +--- +- include: "pre-{{ installer.env.type }}.yml" +- include: run.yml diff --git a/playbooks/installer/rdo-manager/overcloud/heat-templates/pre-baremetal.yml b/playbooks/installer/rdo-manager/overcloud/heat-templates/pre-baremetal.yml new file mode 100644 index 000000000..0e9ec9118 --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/heat-templates/pre-baremetal.yml @@ -0,0 +1,21 @@ +--- +- name: Copy over and modify network config template + hosts: undercloud + tasks: + - name: check that network config file exists + stat: > + path="{{base_dir}}/khaleesi-settings/hardware_environments/{{hw_env.env_type}}/network_configs/{{ installer.network.isolation }}/{{ installer.network.isolation }}.yml" + when: installer.network.isolation != 'none' + + #the long line in this task fails when broken up + - name: copy over template file (baremetal) + synchronize: > + src={{base_dir}}/khaleesi-settings/hardware_environments/{{hw_env.env_type}}/network_configs/{{ installer.network.isolation }}/{{ installer.network.isolation }}.yml + dest={{ instack_user_home }}/network-environment.yaml + when: installer.network.isolation != 'none' + + - name: copy over common environment file (baremetal) + synchronize: > + 
src={{base_dir}}/khaleesi-settings/hardware_environments/common/plan-parameter-neutron-bridge.yaml + dest={{ instack_user_home }}/plan-parameter-neutron-bridge.yaml + when: installer.network.isolation != 'none' and installer.deploy.type == 'plan' diff --git a/playbooks/installer/rdo-manager/overcloud/heat-templates/pre-virthost.yml b/playbooks/installer/rdo-manager/overcloud/heat-templates/pre-virthost.yml new file mode 100644 index 000000000..a5a4c1661 --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/heat-templates/pre-virthost.yml @@ -0,0 +1,23 @@ +--- +- name: Copy over and modify network config template + hosts: undercloud + tasks: + - name: check that network config file exists + stat: > + path="{{base_dir}}/khaleesi-settings/hardware_environments/{{hw_env.env_type}}/network_configs/{{ installer.network.isolation }}/{{ installer.network.isolation }}.yml" + when: installer.network.isolation != 'none' + + #the long line in this task fails when broken up + - name: copy over template file (virt) + copy: + src="{{ base_dir }}/khaleesi-settings/hardware_environments/{{ hw_env.env_type }}/network_configs/{{ installer.network.isolation }}/{{ installer.network.isolation }}.yml" + dest="{{ instack_user_home }}/network-environment.yaml" + when: installer.network.isolation != 'none' + + #the long line in this task fails when broken up + - name: copy over common environment file (virt) + copy: + src="{{ base_dir }}/khaleesi-settings/hardware_environments/common/plan-parameter-neutron-bridge.yaml" + dest="{{ instack_user_home }}/plan-parameter-neutron-bridge.yaml" + when: installer.network.isolation != 'none' and installer.deploy.type == 'plan' + diff --git a/playbooks/installer/rdo-manager/overcloud/heat-templates/run.yml b/playbooks/installer/rdo-manager/overcloud/heat-templates/run.yml new file mode 100644 index 000000000..ac87358ae --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/heat-templates/run.yml @@ -0,0 +1,81 @@ +--- +- name: Copy over 
and modify network config template + hosts: undercloud + tasks: + - name: make a nic-configs dir + file: path={{ instack_user_home }}/nic-configs state=directory + when: installer.network.isolation != 'none' + + #the long line in this task fails when broken up + - name: copy over standard nic-configs default directory + shell: > + cp /usr/share/openstack-tripleo-heat-templates/network/config/{{ installer.network.isolation | replace('_', '-') | replace("-ipv6", "") }}/*.yaml {{ instack_user_home }}/nic-configs + when: installer.network.isolation != 'none' and installer.network.isolation != 'default' + + #the long line in this task fails when broken up + - name: check if env-specific nic-configs exist + local_action: > + stat path={{base_dir}}/khaleesi-settings/hardware_environments/{{hw_env.env_type}}/network_configs/{{ installer.network.isolation }}/nic-configs/ + register: nic_config_dir + when: installer.network.isolation != 'none' + + #the long line in this task fails when broken up + - name: copy nic-configs saved version if available + synchronize: > + src={{base_dir}}/khaleesi-settings/hardware_environments/{{hw_env.env_type}}/network_configs/{{ installer.network.isolation }}/nic-configs/{{ item }}.yaml + dest={{ instack_user_home }}/nic-configs + with_items: + - controller + - compute + - ceph-storage + - cinder-storage + - swift-storage + when: installer.network.isolation != 'none' and nic_config_dir.stat.exists == True + + - name: poll for files to exist + wait_for: path={{ instack_user_home }}/nic-configs/swift-storage.yaml + when: installer.network.isolation != 'none' and installer.network.isolation != 'default' + + - name: Check for additional network config files + local_action: > + shell ls "{{ base_dir }}/khaleesi-settings/hardware_environments/{{ hw_env.env_type }}/network_configs/{{ installer.network.isolation }}/" + register: nic_configs + + - debug: var=nic_configs.stdout_lines + + - name: Custom Network config for node profiles + synchronize: > 
+ src={{ base_dir }}/khaleesi-settings/hardware_environments/{{ hw_env.env_type }}/network_configs/{{ installer.network.isolation }}/{{ item }} + dest=/home/stack/nic-configs/ + ignore_errors: yes + with_items: + - controller.yaml + - compute.yaml + - cinder-storage.yaml + - swift-storage.yaml + - ceph-storage.yaml + when: item in nic_configs.stdout_lines + + - name: create self-signed SSL cert + command: openssl req -x509 -nodes -newkey rsa:2048 -subj "/CN={{ hw_env.ExternalVIP }}" -days 3650 -keyout overcloud-privkey.pem -out overcloud-cacert.pem -extensions v3_ca + when: installer.ssl + + - name: fetch template from single remote host + tls_tht: + dest_dir: "{{ instack_user_home }}/" + cert_filename: "overcloud-cacert.pem" + cert_ca_filename: "overcloud-cacert.pem" + key_filename: "overcloud-privkey.pem" + when: installer.ssl + + - name: copy the self-signed SSL cert + shell: > + cp overcloud-cacert.pem /etc/pki/ca-trust/source/anchors/; + update-ca-trust extract; + sudo: true + when: installer.ssl + + - name: Copy default heat settings template + template: src={{ base_dir }}/khaleesi/playbooks/installer/rdo-manager/templates/default-overcloud-settings.j2 + dest={{ instack_user_home }}/default-overcloud-settings.yaml + mode=0755 diff --git a/playbooks/installer/rdo-manager/overcloud/introspect-nodes/main.yml b/playbooks/installer/rdo-manager/overcloud/introspect-nodes/main.yml new file mode 100644 index 000000000..f01787f39 --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/introspect-nodes/main.yml @@ -0,0 +1,2 @@ +--- +- include: run.yml diff --git a/playbooks/installer/rdo-manager/overcloud/introspect-nodes/run.yml b/playbooks/installer/rdo-manager/overcloud/introspect-nodes/run.yml new file mode 100644 index 000000000..219bbaf8d --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/introspect-nodes/run.yml @@ -0,0 +1,58 @@ +--- +- name: introspect nodes + hosts: undercloud + tasks: + - name: get full list of node UUIDs + shell: > + source 
{{ instack_user_home }}/stackrc; + ironic node-list | grep 'power' | awk '{print $2}' + register: ironic_node_full_list_uuid + + - name: start bulk introspection + shell: > + source {{ instack_user_home }}/stackrc; + openstack baremetal introspection bulk start; + when: installer.introspection_method == 'bulk' + + - name: introspect node by node + shell: > + source {{ instack_user_home }}/stackrc; + ironic node-set-maintenance {{ item }} true; + openstack baremetal introspection start {{ item }}; + export STATUS=$(openstack baremetal introspection status {{ item }} | grep 'finished'); + while [[ $STATUS != *"True"* ]]; do + echo "Waiting for introspection to complete."; + sleep 180; + export STATUS=$(openstack baremetal introspection status {{ item }} | grep 'finished'); + done; + openstack baremetal introspection status {{ item }} | grep 'error' + register: introspect_status + retries: 3 + delay: 5 + until: introspect_status.stdout.find("None") != -1 + with_items: ironic_node_full_list_uuid.stdout_lines + when: installer.introspection_method == 'node_by_node' + + - name: set maintenance status to false + shell: > + source {{ instack_user_home }}/stackrc; + ironic node-set-maintenance {{ item }} False + with_items: ironic_node_full_list_uuid.stdout_lines + when: installer.introspection_method == 'node_by_node' + + - name: check introspection status + register: introspection_result + retries: 45 + delay: 20 + until: introspection_result.rc == 0 + shell: | + source {{ instack_user_home }}/stackrc + OUTPUT=$(openstack baremetal introspection bulk status) + TOTAL_NODES=$(echo "$OUTPUT" | grep -E '\w{8}-\w{4}' | wc -l) + INTROSPECTED_NODES=$(echo "$OUTPUT" | grep -E ' True *\| *None ' | wc -l) + [ "$TOTAL_NODES" == "$INTROSPECTED_NODES" ] + + - name: show profile + shell: > + source {{ instack_user_home }}/stackrc; + instack-ironic-deployment --show-profile; diff --git a/playbooks/installer/rdo-manager/overcloud/main.yml
b/playbooks/installer/rdo-manager/overcloud/main.yml index b85681e17..cf2ad8fa8 100644 --- a/playbooks/installer/rdo-manager/overcloud/main.yml +++ b/playbooks/installer/rdo-manager/overcloud/main.yml @@ -1,3 +1,10 @@ --- -- include: run.yml +- include: register-nodes/main.yml +- include: introspect-nodes/main.yml +- include: advanced-profile-matching/main.yml +- include: heat-templates/main.yml +#note: ovb {{ base_dir }}/khaleesi/installer/rdo-manager/installer/openstack-virtual-baremetal/main.yml +- include: nameserver/main.yml +- include: deploy-overcloud/main.yml +- include: ansible-inventory.yml - include: status.yml diff --git a/playbooks/installer/rdo-manager/overcloud/nameserver/main.yml b/playbooks/installer/rdo-manager/overcloud/nameserver/main.yml new file mode 100644 index 000000000..f01787f39 --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/nameserver/main.yml @@ -0,0 +1,2 @@ +--- +- include: run.yml diff --git a/playbooks/installer/rdo-manager/overcloud/nameserver/run.yml b/playbooks/installer/rdo-manager/overcloud/nameserver/run.yml new file mode 100644 index 000000000..98c4857ec --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/nameserver/run.yml @@ -0,0 +1,22 @@ +- name: configure a nameserver for the overcloud + hosts: undercloud + tasks: + - name: get subnet uuid + shell: > + source {{ instack_user_home }}/stackrc; + neutron subnet-list | grep {{ hw_env.network }} | sed -e 's/|//g' | awk '{print $1}' + register: subnet_uuid + when: hw_env.env_type is defined and hw_env.env_type in ['ovb_host_cloud', 'scale_lab'] + + - name: get nameserver + sudo: yes + shell: > + cat /etc/resolv.conf | grep -m 1 'nameserver' | sed -n -e 's/^.*nameserver //p' + register: nameserver + when: hw_env.env_type is defined and hw_env.env_type in ['ovb_host_cloud', 'scale_lab'] + + - name: configure a nameserver for the overcloud + shell: > + source {{ instack_user_home }}/stackrc; + neutron subnet-update {{ subnet_uuid.stdout }} --dns-nameserver {{ nameserver.stdout
}} + when: hw_env.env_type is defined and hw_env.env_type in ['ovb_host_cloud', 'scale_lab'] diff --git a/playbooks/installer/rdo-manager/overcloud/register-nodes/main.yml b/playbooks/installer/rdo-manager/overcloud/register-nodes/main.yml new file mode 100644 index 000000000..f01787f39 --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/register-nodes/main.yml @@ -0,0 +1,2 @@ +--- +- include: run.yml diff --git a/playbooks/installer/rdo-manager/overcloud/register-nodes/run.yml b/playbooks/installer/rdo-manager/overcloud/register-nodes/run.yml new file mode 100644 index 000000000..39f986bf4 --- /dev/null +++ b/playbooks/installer/rdo-manager/overcloud/register-nodes/run.yml @@ -0,0 +1,34 @@ +--- +- name: register nodes + hosts: undercloud + tasks: + - name: register bm nodes with openstack cli + shell: > + source {{ instack_user_home }}/stackrc; + openstack baremetal import --json instackenv.json; + register: register_nodes_result + retries: 10 + delay: 10 + until: register_nodes_result.rc == 0 + + - name: register bm nodes with ironic + shell: > + source {{ instack_user_home }}/stackrc; + openstack baremetal configure boot + register: register_nodes_result + retries: 10 + delay: 10 + until: register_nodes_result.rc == 0 + + - name: get nodes UUID + shell: > + source {{ instack_user_home }}/stackrc; + ironic node-list | grep 'power' | awk '{print $2}' | tail -3 + register: ironic_node_list_uuid + + - name: update nodes with disk size hint + shell: > + source {{ instack_user_home }}/stackrc; + ironic node-update {{ item }} add properties/root_device='{"size": {{ hw_env.disk_root_device_size | int }}}' + with_items: ironic_node_list_uuid.stdout_lines + when: (hw_env is defined) and (hw_env.disk_root_device_size is defined) and product.full_version == '8-director' diff --git a/playbooks/installer/rdo-manager/overcloud/run.yml b/playbooks/installer/rdo-manager/overcloud/run.yml deleted file mode 100644 index a5bbc82c2..000000000 --- 
a/playbooks/installer/rdo-manager/overcloud/run.yml +++ /dev/null @@ -1,330 +0,0 @@ ---- -- name: register and discover nodes - hosts: undercloud - tasks: - - name: register bm nodes with openstack cli - register: register_nodes_result - retries: 10 - delay: 10 - until: register_nodes_result.rc == 0 - shell: > - source {{ instack_user_home }}/stackrc; - openstack baremetal import --json instackenv.json; - - - - name: register bm nodes with ironic - register: register_nodes_result - retries: 10 - delay: 10 - until: register_nodes_result.rc == 0 - shell: > - source {{ instack_user_home }}/stackrc; - openstack baremetal configure boot - - - name: get nodes UUID - shell: > - source {{ instack_user_home }}/stackrc; - ironic node-list | grep 'power' | awk '{print $2}' | tail -3 - register: ironic_node_list_uuid - - - name: update nodes with disk size hint - shell: > - source {{ instack_user_home }}/stackrc; - ironic node-update {{ item }} add properties/root_device='{"size": {{ hw_env.disk_root_device_size | int }}}' - with_items: ironic_node_list_uuid.stdout_lines - when: (hw_env is defined) and (hw_env.disk_root_device_size is defined) and product.full_version == '8-director' - - - name: introspect nodes - shell: > - source {{ instack_user_home }}/stackrc; - openstack baremetal introspection bulk start; - - - name: check instrospections status - register: introspection_result - retries: 45 - delay: 20 - until: introspection_result.rc == 0 - shell: | - source {{ instack_user_home }}/stackrc - OUTPUT=$(openstack baremetal introspection bulk status) - TOTAL_NODES=$(echo "$OUTPUT" | grep -E '\w{8}-\w{4}' | wc -l) - INTROSPECTED_NODES=$(echo "$OUTPUT" | grep -E ' True *\| *None ' | wc -l) - [ "$TOTAL_NODES" == "$INTROSPECTED_NODES" ] - - - name: show profile - shell: > - source {{ instack_user_home }}/stackrc; - instack-ironic-deployment --show-profile; - -- include: run-matching-{{ installer.match_style | default('basic') }}.yml - -- name: wait for nova - hosts: undercloud 
- tasks: - - name: wait until nova becomes aware of first bare metal instance - register: vcpu_count_single - retries: 20 - delay: 15 - until: vcpu_count_single.stdout|int > 0 - ignore_errors: true - shell: > - source {{ instack_user_home }}/stackrc; - nova hypervisor-stats | grep ' vcpus ' | head -n1 | awk '{ print $4; }' - -- name: Copy over and modify network config template - hosts: undercloud - tasks: - - name: check that network config file exists - stat: path="{{base_dir}}/khaleesi-settings/hardware_environments/{{hw_env.env_type}}/network_configs/{{ installer.network.isolation }}/{{ installer.network.isolation }}.yml" - when: installer.network.isolation != 'none' - - - name: copy over template file (baremetal) - synchronize: > - src={{base_dir}}/khaleesi-settings/hardware_environments/{{hw_env.env_type}}/network_configs/{{ installer.network.isolation }}/{{ installer.network.isolation }}.yml dest={{ instack_user_home }}/network-environment.yaml - when: installer.network.isolation != 'none' and installer.env.type != "virthost" - - - name: copy over template file (virt) - local_action: shell pushd {{ base_dir }}/khaleesi; rsync --delay-updates -F --compress --archive --rsh "ssh -F ssh.config.ansible -S none -o StrictHostKeyChecking=no" {{base_dir}}/khaleesi-settings/hardware_environments/{{hw_env.env_type}}/network_configs/{{ installer.network.isolation }}/{{ installer.network.isolation }}.yml undercloud:{{ instack_user_home }}/network-environment.yaml - when: installer.network.isolation != 'none' and installer.env.type == "virthost" - - - name: copy over common environment file (baremetal) - synchronize: > - src={{base_dir}}/khaleesi-settings/hardware_environments/common/plan-parameter-neutron-bridge.yaml dest={{ instack_user_home }}/plan-parameter-neutron-bridge.yaml - when: installer.network.isolation != 'none' and installer.env.type != "virthost" and installer.deploy.type == 'plan' - - - name: copy over common environment file (virt) - local_action: shell 
pushd {{ base_dir }}/khaleesi; rsync --delay-updates -F --compress --archive --rsh "ssh -F ssh.config.ansible -S none -o StrictHostKeyChecking=no" {{base_dir}}/khaleesi-settings/hardware_environments/common/plan-parameter-neutron-bridge.yaml undercloud:{{ instack_user_home }}/plan-parameter-neutron-bridge.yaml - when: installer.network.isolation != 'none' and installer.env.type == "virthost" and installer.deploy.type == 'plan' - - - name: make a nic-configs dir - shell: > - mkdir {{ instack_user_home }}/nic-configs - when: installer.network.isolation != 'none' - - - name: copy over standard nic-configs default directory - shell: > - cp /usr/share/openstack-tripleo-heat-templates/network/config/{{ installer.network.isolation | replace('_', '-') }}/*.yaml {{ instack_user_home }}/nic-configs - when: installer.network.isolation != 'none' and installer.network.isolation != 'default' - - - name: check if env-specific nic-configs exist - local_action: stat path={{base_dir}}/khaleesi-settings/hardware_environments/{{hw_env.env_type}}/network_configs/{{ installer.network.isolation }}/nic-configs/ - register: nic_config_dir - when: installer.network.isolation != 'none' and installer.env.type != 'virthost' - - - name: copy nic-configs saved version if available - synchronize: > - src={{base_dir}}/khaleesi-settings/hardware_environments/{{hw_env.env_type}}/network_configs/{{ installer.network.isolation }}/nic-configs/{{ item }}.yaml - dest={{ instack_user_home }}/nic-configs - with_items: - - controller - - compute - - ceph-storage - - cinder-storage - - swift-storage - when: installer.network.isolation != 'none' and installer.env.type != "virthost" and nic_config_dir.stat.exists == True - - - name: poll for files to exist - wait_for: path={{ instack_user_home }}/nic-configs/swift-storage.yaml - when: installer.network.isolation != 'none' and installer.network.isolation != 'default' - - - name: Check for additional network config files - local_action: shell ls "{{ base_dir 
}}/khaleesi-settings/hardware_environments/{{ hw_env.env_type }}/network_configs/{{ installer.network.isolation }}/" - register: nic_configs - - - debug: var=nic_configs.stdout_lines - - - name: Custom Network config for node profiles - synchronize: > - src={{ base_dir }}/khaleesi-settings/hardware_environments/{{ hw_env.env_type }}/network_configs/{{ installer.network.isolation }}/{{ item }} dest=/home/stack/nic-configs/ - ignore_errors: yes - with_items: - - controller.yaml - - compute.yaml - - cinder-storage.yaml - - swift-storage.yaml - - ceph-storage.yaml - when: item in nic_configs.stdout_lines - -- name: Set up for custom template deploy with nova change - hosts: undercloud:&openstack_virtual_baremetal - tasks: - - name: clone openstack-virtual-baremetal repo - git: - repo=https://github.com/cybertron/openstack-virtual-baremetal/ - dest={{instack_user_home}}/openstack-virtual-baremetal - - - name: pin openstack virtual baremetal to a specific hash - shell: > - chdir={{instack_user_home}}/openstack-virtual-baremetal - git reset --hard {{ installer.custom_deploy.ovb_pin_version }} - when: installer.custom_deploy.ovb_pin_version is defined - - - name: copy tripleo-heat-templates to custom - shell: > - cp -r /usr/share/openstack-tripleo-heat-templates/ {{ instack_user_home }}/custom - - - name: add the necessary hieradata configuration - shell: > - echo "neutron::agents::ml2::ovs::firewall_driver: neutron.agent.firewall.NoopFirewallDriver" >> {{ instack_user_home }}/custom/puppet/hieradata/common.yaml - - - name: create param.ini file - local_action: shell echo "DNS_SERVER={{ hw_env.dns_server }}" > {{ base_dir }}/param.ini - - - name: check that param.ini file exists - local_action: wait_for path="{{ base_dir }}/param.ini" - - - name: add other variables to param.ini file - local_action: shell echo -e "PARENT_WORKSPACE_DIR={{ base_dir }}\nREMOTE_FILE_SERVER={{ installer.custom_deploy.image.remote_file_server }}\nIMAGE_NAME={{ installer.custom_deploy.image.name 
}}\nPROVISION_CIDR={{ installer.custom_deploy.host_cloud_networks.provision.cidr }}\nPRIVATE_CIDR={{ installer.custom_deploy.host_cloud_networks.private.cidr }}\nPUBLIC_CIDR={{ installer.custom_deploy.host_cloud_networks.public.cidr }}" >> {{ base_dir }}/param.ini - - - name: get ctlplane subnet uuid - register: ctlplane_subnet_uuid - shell: > - source {{ instack_user_home }}/stackrc; - neutron net-show ctlplane -f value -F subnets; - when: installer.env.type == "virthost" - - - name: update dns server on ctlplane - shell: > - source {{ instack_user_home }}/stackrc; - neutron subnet-update {{ ctlplane_subnet_uuid.stdout }} --dns_nameservers list=true {{ hw_env.dns_server }} - when: installer.env.type == "virthost" - -- name: deploy the overcloud - hosts: undercloud - tasks: - - name: get ironic node ids (workaround for bz 1246641) - shell: > - source {{ instack_user_home }}/stackrc; - ironic node-list | grep 'None' | awk '{ print $2; }' - register: ironic_node_ids - when: workarounds.enabled is defined and workarounds.enabled|bool - - - name: power off ironic nodes (workaround for bz 1246641) - shell: > - source {{ instack_user_home }}/stackrc; - ironic node-set-power-state {{item}} 'off' - with_items: ironic_node_ids.stdout_lines - when: workarounds.enabled is defined and workarounds.enabled|bool - - - name: get number of nodes that could be used for the overcloud - shell: > - if [ -f {{ instack_user_home }}/instackenv.json ]; then - cat {{ instack_user_home }}/instackenv.json | grep -o pm_addr | wc -l - else - cat {{ installer.nodes.node_count | default('3') }} - fi - register: number_of_possible_nodes - - - name: poll for nodes to be in powered off state - register: ironic_node_power_off - retries: 10 - shell: > - source {{ instack_user_home }}/stackrc; - ironic node-list | grep 'power off' | wc -l - until: ironic_node_power_off.stdout == number_of_possible_nodes.stdout - - - name: get subnet uuid - shell: > - source {{ instack_user_home }}/stackrc; - neutron 
subnet-list | grep {{ hw_env.network }} | sed -e 's/|//g' | awk '{print $1}' - register: subnet_uuid - when: hw_env.env_type is defined and hw_env.env_type in ['ovb_host_cloud', 'scale_lab'] - - - name: get nameserver - shell: > - cat /etc/resolv.conf | grep -m 1 'nameserver' | sed -n -e 's/^.*nameserver //p' - register: nameserver - sudo: yes - when: hw_env.env_type is defined and hw_env.env_type in ['ovb_host_cloud', 'scale_lab'] - - - name: configure a nameserver for the overcloud - shell: > - source {{ instack_user_home }}/stackrc; - neutron subnet-update {{ subnet_uuid.stdout }} --dns-nameserver {{ nameserver.stdout }} - when: hw_env.env_type is defined and hw_env.env_type in ['ovb_host_cloud', 'scale_lab'] - - - - name: get plan list - register: overcloud_uuid_result - shell: > - source {{ instack_user_home }}/stackrc; - openstack management plan list | grep overcloud | cut -d " " -f2 - - - name: set fact for openstack management plan - set_fact: - overcloud_uuid: "{{ overcloud_uuid_result.stdout }}" - - - name: copy template file with environment variables for overcloud nodes - template: - src={{ base_dir }}/khaleesi/playbooks/installer/rdo-manager/templates/deploy-nodes.j2 - dest={{ instack_user_home }}/deploy-nodesrc - mode=0755 - - - name: copy template file with environment variables for overcloud deploy - template: - src={{ base_dir }}/khaleesi/playbooks/installer/rdo-manager/templates/deploy-overcloudrc.j2 - dest={{ instack_user_home }}/deploy-overcloudrc - mode=0755 - - - name: set plan values for plan based ceph deployments - shell: > - source {{ instack_user_home }}/stackrc; - source {{ instack_user_home }}/deploy-nodesrc; - if [ "$CEPHSTORAGESCALE" -gt "0" ]; then - openstack management plan set {{ overcloud_uuid }} \ - -P Controller-1::CinderEnableIscsiBackend=false \ - -P Controller-1::CinderEnableRbdBackend=true \ - -P Controller-1::GlanceBackend=rbd \ - -P Compute-1::NovaEnableRbdBackend=true; - fi - when: installer.deploy.type == 'plan' - - - 
name: echo deploy command - register: overcloud_deploy_command - shell: > - source {{ instack_user_home }}/stackrc; - source {{ instack_user_home }}/deploy-nodesrc; - source {{ instack_user_home }}/deploy-overcloudrc; - echo $DEPLOY_COMMAND - - - name: find the env files to be used in deploy - register: env_files - shell: > - echo {{ overcloud_deploy_command.stdout }} | grep -o -e '\-e .*yaml' | sed s'/\-e //g' | sed s'#[A-Z a-z 0-9 _ -]*\.yaml##g' - - - name: clone template validation tools - git: - repo=https://github.com/openstack/tripleo-heat-templates.git - dest={{instack_user_home}}/tripleo-heat-templates - - - name: validate the yaml files - shell: > - chdir={{instack_user_home}} - python tripleo-heat-templates/tools/yaml-validate.py {{ item }} - register: validate_yaml_output - failed_when: validate_yaml_output.stdout.find('Validation failed on') != -1 - with_items: env_files.stdout.split('\n') - - - name: deploy-overcloud - register: overcloud_deployment_result - ignore_errors: yes - shell: > - source {{ instack_user_home }}/stackrc; - {{ overcloud_deploy_command.stdout }} &> overcloud_deployment_console.log - - - name: echo deploy-overcloud return code - debug: var=overcloud_deployment_result.rc - - - name: heat stack-list - ignore_errors: yes - shell: > - source {{ instack_user_home }}/stackrc; - heat stack-list - - - name: overcloud deployment logs - debug: msg=" Please refer to the undercloud log file for detailed status. 
The deployment debug logs are stored under /home/stack" - - - name: set fact overcloud_deployment_result - set_fact: - overcloud_deployment_result: "{{ overcloud_deployment_result.rc }}" - diff --git a/playbooks/installer/rdo-manager/overcloud/status.yml b/playbooks/installer/rdo-manager/overcloud/status.yml index fff46ad6a..8b389e5a6 100644 --- a/playbooks/installer/rdo-manager/overcloud/status.yml +++ b/playbooks/installer/rdo-manager/overcloud/status.yml @@ -2,6 +2,10 @@ - name: Post deploy hosts: undercloud tasks: + - name: set fact overcloud_deployment_result + set_fact: + overcloud_deployment_result: "{{ overcloud_deployment_result | default('1') }}" + - name: echo deploy-overcloud return code in status playbook debug: var=overcloud_deployment_result @@ -18,84 +22,51 @@ openstack server list; - name: heat debug deploy-overcloud failure - when: overcloud_deployment_result is defined and overcloud_deployment_result != "0" ignore_errors: yes shell: > source {{ instack_user_home }}/stackrc; heat resource-list overcloud; heat event-list overcloud; + when: overcloud_deployment_result is defined and overcloud_deployment_result != "0" - name: debug deploy-overcloud failure - when: overcloud_deployment_result is defined and overcloud_deployment_result != "0" ignore_errors: yes shell: > source {{ instack_user_home }}/stackrc; heat resource-show overcloud ControllerNodesPostDeployment; + when: overcloud_deployment_result is defined and overcloud_deployment_result != "0" - name: debug all deployment failures - when: overcloud_deployment_result is defined and overcloud_deployment_result != "0" ignore_errors: yes shell: > source {{ instack_user_home }}/stackrc; - for failed_deployment in $(heat resource-list --nested-depth 5 overcloud | grep FAILED | grep 'StructuredDeployment ' | cut -d '|' -f3); do heat deployment-show $failed_deployment; done; + for failed_deployment in $(heat resource-list --nested-depth 5 overcloud | grep FAILED | grep 'StructuredDeployment ' | cut -d 
'|' -f3); \ + do heat deployment-show $failed_deployment; done; + when: overcloud_deployment_result is defined and overcloud_deployment_result != "0" - name: grep for errors in heat-engine.log when: overcloud_deployment_result is defined and overcloud_deployment_result != "0" sudo: yes command: "grep ERROR /var/log/heat/heat-engine.log" + ignore_errors: yes + + - name: grep for errors in the ironic logs + when: overcloud_deployment_result is defined and overcloud_deployment_result != "0" + sudo: yes + shell: "cat /var/log/ironic/* | grep -v ERROR_FOR_DIVISION_BY_ZERO | grep ERROR" + ignore_errors: yes - name: show ironic nodes create template - template: src={{ base_dir }}/khaleesi/playbooks/installer/rdo-manager/templates/show_nodes.sh dest={{ instack_user_home }}/show_nodes.sh mode=0755 + template: > + src={{ base_dir }}/khaleesi/playbooks/installer/rdo-manager/templates/show_nodes.sh + dest={{ instack_user_home }}/show_nodes.sh + mode=0755 when: overcloud_deployment_result is defined and overcloud_deployment_result == "0" - name: show ironic nodes shell: "{{ instack_user_home }}/show_nodes.sh" when: overcloud_deployment_result is defined and overcloud_deployment_result == "0" - - name: copy the undercloud id_rsa key back to the slave - fetch: src=~/.ssh/id_rsa dest="{{ base_dir }}/khaleesi/id_rsa_undercloud" flat=yes - - - name: copy get-overcloud-nodes.py to undercloud - template: src={{ base_dir }}/khaleesi/playbooks/installer/rdo-manager/templates/get-overcloud-nodes.py.j2 dest={{ instack_user_home }}/get-overcloud-nodes.py mode=0755 - - - name: fetch overcloud node names and IPs - register: overcloud_nodes - ignore_errors: yes - shell: > - source {{ instack_user_home }}/stackrc; - python {{ instack_user_home }}/get-overcloud-nodes.py - - - name: add each overcloud node to ansible - with_dict: overcloud_nodes.stdout - ignore_errors: yes - add_host: - name={{ item.key }} - groups=overcloud - ansible_ssh_host={{ item.key }} - ansible_fqdn={{ item.value }} - 
ansible_ssh_user="heat-admin" - ansible_ssh_private_key_file="{{ lookup('env', 'PWD') }}/id_rsa_undercloud" - -- name: regenerate the inventory file after adding hosts - hosts: localhost - tasks: - - name: set_fact for undercloud ip #required for regeneration of ssh.config.ansible - set_fact: undercloud_ip={{ hostvars['undercloud']['ansible_default_ipv4']['address'] }} - - - name: create inventory from template - template: - dest: "{{ lookup('env', 'PWD') }}/{{ tmp.node_prefix }}hosts" - src: "{{ base_dir }}/khaleesi/playbooks/provisioner/templates/inventory.j2" - - - name: symlink inventory to a static name - file: - dest: "{{ lookup('env', 'PWD') }}/hosts" - state: link - src: "{{ lookup('env', 'PWD') }}/{{ tmp.node_prefix }}hosts" - - - name: regenerate ssh config - template: src={{ base_dir }}/khaleesi/playbooks/installer/rdo-manager/templates/ssh_config.j2 dest={{ base_dir }}/khaleesi/ssh.config.ansible mode=0755 - - name: debug output from the overcloud controller hosts: overcloud-controller-0 gather_facts: no @@ -105,7 +76,7 @@ command: ceph status ignore_errors: yes -- name: dump puppet apply logs into /var/log for collection +- name: dump journal logs into /var/log for collection hosts: overcloud gather_facts: no tasks: @@ -114,6 +85,11 @@ shell: journalctl -u os-collect-config > /var/log/os-collect-config.log ignore_errors: yes + - name: get ironic logs + sudo: yes + shell: journalctl -u openstack-ironic-conductor -u openstack-ironic-api > /var/log/ironic-conductor-api-journal.log + ignore_errors: yes + - name: fail playbook when instack-deploy-overcloud fails hosts: undercloud tasks: diff --git a/playbooks/installer/rdo-manager/register-nodes.yml b/playbooks/installer/rdo-manager/register-nodes.yml new file mode 100644 index 000000000..1358ee44d --- /dev/null +++ b/playbooks/installer/rdo-manager/register-nodes.yml @@ -0,0 +1,2 @@ +--- +- include: overcloud/register-nodes/main.yml diff --git a/playbooks/installer/rdo-manager/templates/build-img-env.j2 
b/playbooks/installer/rdo-manager/templates/build-img-env.j2 index 141de84bb..78326b530 100644 --- a/playbooks/installer/rdo-manager/templates/build-img-env.j2 +++ b/playbooks/installer/rdo-manager/templates/build-img-env.j2 @@ -1,8 +1,11 @@ export DIB_LOCAL_IMAGE={{ distro.images[distro.name][distro.full_version].guest_image_name }} +{% if product.name == 'rdo' %} +export RDO_RELEASE={{ product.full_version }} +{%endif %} + {% if product.repo_type is defined and product.repo_type in ["poodle", "puddle"] %} export DIB_YUM_REPO_CONF="{{installer.dib_dir}}/rhos-release-{{product.repo.core_product_version}}-director.repo {{installer.dib_dir}}/rhos-release-{{product.repo.core_product_version}}.repo {{installer.dib_dir}}/rhos-release-rhel-{{distro.full_version}}.repo" -export DIB_LOCAL_IMAGE={{ distro.images[distro.name][ansible_distribution_version].guest_image_name }} export USE_DELOREAN_TRUNK=0 export NODE_DIST=rhel7 export RUN_RHOS_RELEASE=1 @@ -32,3 +35,7 @@ export DELOREAN_TRUNK_REPO="{{ product.repo['delorean'][ansible_distribution][di export DELOREAN_REPO_FILE="{{ product.repo.delorean.repo_file }}" export NODE_DIST=centos7 {%endif %} + +{% if installer.proxy != 'none' %} +export http_proxy={{ installer.http_proxy_url }} +{%endif %} diff --git a/playbooks/installer/rdo-manager/templates/default-overcloud-settings.j2 b/playbooks/installer/rdo-manager/templates/default-overcloud-settings.j2 new file mode 100644 index 000000000..961da0c1e --- /dev/null +++ b/playbooks/installer/rdo-manager/templates/default-overcloud-settings.j2 @@ -0,0 +1,3 @@ +parameters: + CinderLVMLoopDeviceSize: 10000 + diff --git a/playbooks/installer/rdo-manager/templates/deploy-overcloudrc.j2 b/playbooks/installer/rdo-manager/templates/deploy-overcloudrc.j2 index 993492caa..3854a2577 100644 --- a/playbooks/installer/rdo-manager/templates/deploy-overcloudrc.j2 +++ b/playbooks/installer/rdo-manager/templates/deploy-overcloudrc.j2 @@ -5,8 +5,6 @@ export DEPLOY_COMMAND="openstack overcloud 
deploy --debug \ {{ installer.deploy.command }} \ {{ installer.custom_deploy.command }} \ --libvirt-type=$OVERCLOUD_LIBVIRT_TYPE \ - --neutron-network-type {{ installer.network.variant }} \ - --neutron-tunnel-types {{ installer.network.variant }} \ --ntp-server {{ distro.config.ntp_server_ip }} \ --control-scale $CONTROLSCALE \ --compute-scale $COMPUTESCALE \ @@ -19,6 +17,14 @@ export DEPLOY_COMMAND="openstack overcloud deploy --debug \ --block-storage-flavor $BLOCKSTORAGEFLAVOR \ --swift-storage-flavor $SWIFTSTORAGEFLAVOR" +{% if installer.network.variant == 'vlan' %} +export DEPLOY_COMMAND="$DEPLOY_COMMAND --neutron-network-type vlan \ + --neutron-disable-tunneling" +{% else %} +export DEPLOY_COMMAND="$DEPLOY_COMMAND --neutron-network-type {{ installer.network.variant }} \ + --neutron-tunnel-types {{ installer.network.variant }}" +{% endif %} + {% if installer.env.type != "virthost" %} export NEUTRON_PUBLIC_INTERFACE={{ hw_env.neutron_public_interface }} export DEPLOY_COMMAND="$DEPLOY_COMMAND --neutron-public-interface=$NEUTRON_PUBLIC_INTERFACE " @@ -27,21 +33,35 @@ export DEPLOY_COMMAND="$DEPLOY_COMMAND --neutron-public-interface=$NEUTRON_PUBLI export DEPLOY_TIMEOUT={{ hw_env.deploy_timeout | default('90') }} export DEPLOY_COMMAND="$DEPLOY_COMMAND --timeout=$DEPLOY_TIMEOUT " -{% if installer.network.isolation != 'none' and installer.env.type != "virthost" %} -export DEPLOY_COMMAND="$DEPLOY_COMMAND -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \ - -e ~/network-environment.yaml " +{% if installer.network.isolation != 'none' and installer.network.protocol == "ipv4" %} +export DEPLOY_COMMAND="$DEPLOY_COMMAND -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml" +{% endif %} + +{% if installer.network.isolation != 'none' and installer.network.protocol == "ipv6" %} +export DEPLOY_COMMAND="$DEPLOY_COMMAND -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation-v6.yaml" +{% endif %} 
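The template builds `DEPLOY_COMMAND` incrementally, appending flags and `-e` environment files branch by branch. A condensed sketch of the vlan-versus-tunneling branch that the new `{% if installer.network.variant == 'vlan' %}` block renders (the variant value is hard-coded here for illustration):

```shell
# Condensed sketch of the incremental DEPLOY_COMMAND construction in
# deploy-overcloudrc.j2: start from a base command, then append flags
# per branch. The variant value is hard-coded for illustration.
DEPLOY_COMMAND="openstack overcloud deploy --debug"
variant="vlan"

if [ "$variant" = "vlan" ]; then
  # vlan tenant networks disable tunneling outright
  DEPLOY_COMMAND="$DEPLOY_COMMAND --neutron-network-type vlan --neutron-disable-tunneling"
else
  # tunneled variants set a matching tunnel type
  DEPLOY_COMMAND="$DEPLOY_COMMAND --neutron-network-type $variant --neutron-tunnel-types $variant"
fi

echo "$DEPLOY_COMMAND"
```

Each subsequent Jinja block follows the same append pattern, which is why the ordering of the `{% if %}` blocks in the template is also the ordering of `-e` files on the final command line.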
+ +{% if installer.network.isolation != 'none' and installer.env.type == "virthost" and installer.network.protocol == "ipv4" %} +export DEPLOY_COMMAND="$DEPLOY_COMMAND -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml" {% endif %} -{% if installer.network.isolation != 'none' and installer.env.type == "virthost" %} -export DEPLOY_COMMAND="$DEPLOY_COMMAND -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \ - -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \ - -e ~/network-environment.yaml " +{% if installer.network.isolation != 'none' and installer.env.type == "virthost" and installer.network.protocol == "ipv6" %} +export DEPLOY_COMMAND="$DEPLOY_COMMAND -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans-v6.yaml" +{% endif %} + +{% if installer.network.isolation != 'none' %} +export DEPLOY_COMMAND="$DEPLOY_COMMAND -e ~/network-environment.yaml" {% endif %} {% if installer.network.isolation != 'none' and installer.deploy.type == 'plan' %} export DEPLOY_COMMAND="$DEPLOY_COMMAND -e ~/plan-parameter-neutron-bridge.yaml " {% endif %} +{% if installer.ssl == True %} +export DEPLOY_COMMAND="$DEPLOY_COMMAND -e ~/enable-tls.yaml \ + -e ~/inject-trust-anchor.yaml " +{% endif %} + {% if installer.deploy.type == 'templates' and product.build is defined and product.build != 'ga' %} if [ "$CEPHSTORAGESCALE" -gt "0" ]; then export DEPLOY_COMMAND="$DEPLOY_COMMAND -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml" @@ -55,3 +75,5 @@ export DEPLOY_COMMAND="$DEPLOY_COMMAND -e ~/openstack-virtual-baremetal/template {% if product.full_version != '7-director' %} export DEPLOY_COMMAND="$DEPLOY_COMMAND -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml" {% endif %} + +export DEPLOY_COMMAND="$DEPLOY_COMMAND -e ~/default-overcloud-settings.yaml" diff --git 
a/playbooks/installer/rdo-manager/overcloud/templates/edeploy-state.j2 b/playbooks/installer/rdo-manager/templates/edeploy-state.j2 similarity index 100% rename from playbooks/installer/rdo-manager/overcloud/templates/edeploy-state.j2 rename to playbooks/installer/rdo-manager/templates/edeploy-state.j2 diff --git a/playbooks/installer/rdo-manager/templates/rpm.macros.proxy.j2 b/playbooks/installer/rdo-manager/templates/rpm.macros.proxy.j2 new file mode 100644 index 000000000..6d666b999 --- /dev/null +++ b/playbooks/installer/rdo-manager/templates/rpm.macros.proxy.j2 @@ -0,0 +1,2 @@ +%_httpproxy {{ installer.http_proxy_host }} +%_httpport {{ installer.http_proxy_port }} diff --git a/playbooks/installer/rdo-manager/templates/ssh_config.j2 b/playbooks/installer/rdo-manager/templates/ssh_config.j2 index 30cf606ff..560320d20 100644 --- a/playbooks/installer/rdo-manager/templates/ssh_config.j2 +++ b/playbooks/installer/rdo-manager/templates/ssh_config.j2 @@ -1,13 +1,13 @@ {% if groups["virthost"] is defined %} Host undercloud-root - ProxyCommand ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o ConnectTimeout=60 -i {{ hostvars[groups['virthost'][0]].ansible_ssh_private_key_file }} stack@{{ hostvars[groups['virthost'][0]].ansible_ssh_host }} -W {{ hostvars['localhost'].undercloud_ip }}:22 + ProxyCommand ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o ConnectTimeout=60 -i {{ hostvars[groups['virthost'][0]].ansible_ssh_private_key_file }} stack@{{ hostvars[groups['virthost'][0]].ansible_host }} -W {{ hostvars['localhost'].undercloud_ip }}:22 IdentityFile id_rsa_virt_host User root StrictHostKeyChecking no UserKnownHostsFile=/dev/null Host undercloud - ProxyCommand ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o ConnectTimeout=60 -i {{ hostvars[groups['virthost'][0]].ansible_ssh_private_key_file }} stack@{{ hostvars[groups['virthost'][0]].ansible_ssh_host }} -W {{ hostvars['localhost'].undercloud_ip }}:22 + ProxyCommand 
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o ConnectTimeout=60 -i {{ hostvars[groups['virthost'][0]].ansible_ssh_private_key_file }} stack@{{ hostvars[groups['virthost'][0]].ansible_host }} -W {{ hostvars['localhost'].undercloud_ip }}:22 IdentityFile id_rsa_virt_host User stack StrictHostKeyChecking no @@ -19,11 +19,18 @@ Host undercloud-from-virthost IdentitiesOnly yes User root StrictHostKeyChecking no + +Host undercloud-from-virthost-as-stack + Hostname {{ hostvars['localhost'].undercloud_ip }} + IdentityFile ~/.ssh/id_rsa + IdentitiesOnly yes + User stack + StrictHostKeyChecking no {%endif %} {% if groups["virthost"] is not defined and hw_env is defined and hw_env.env_type != "ovb_host_cloud" %} -Host {{ hostvars[groups['undercloud'][0]].ansible_ssh_host }} - Hostname {{ hostvars[groups['undercloud'][0]].ansible_ssh_host }} +Host {{ hostvars[groups['undercloud'][0]].ansible_host }} + Hostname {{ hostvars[groups['undercloud'][0]].ansible_host }} IdentityFile ~/.ssh/id_rsa IdentitiesOnly yes User root @@ -32,14 +39,14 @@ Host {{ hostvars[groups['undercloud'][0]].ansible_ssh_host }} {% if groups["virthost"] is not defined and hw_env is defined and hw_env.env_type == "ovb_host_cloud" %} Host undercloud-root - ProxyCommand ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o ConnectTimeout=60 -i {{ hostvars[groups['provisioned'][0]].ansible_ssh_private_key_file }} stack@{{ hostvars[groups['provisioned'][0]].ansible_ssh_host }} -W {{ floating_ip }}:22 + ProxyCommand ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o ConnectTimeout=60 -i {{ hostvars[groups['provisioned'][0]].ansible_ssh_private_key_file }} stack@{{ hostvars[groups['provisioned'][0]].ansible_host }} -W {{ floating_ip }}:22 IdentityFile {{ base_dir }}/khaleesi/id_rsa_undercloud_instance User root StrictHostKeyChecking no UserKnownHostsFile=/dev/null Host undercloud - ProxyCommand ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o 
ConnectTimeout=60 -i {{ hostvars[groups['provisioned'][0]].ansible_ssh_private_key_file }} stack@{{ hostvars[groups['provisioned'][0]].ansible_ssh_host }} -W {{ floating_ip }}:22 + ProxyCommand ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o ConnectTimeout=60 -i {{ hostvars[groups['provisioned'][0]].ansible_ssh_private_key_file }} stack@{{ hostvars[groups['provisioned'][0]].ansible_host }} -W {{ floating_ip }}:22 IdentityFile {{ base_dir }}/khaleesi/id_rsa_undercloud_instance User stack StrictHostKeyChecking no @@ -56,7 +63,7 @@ Host undercloud-from-baremetal-host {% if groups["overcloud"] is defined %} {% for host in groups["overcloud"] %} Host {{ host }} - ProxyCommand ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o ConnectTimeout=60 -i ~/.ssh/id_rsa -F ssh.config.ansible {{ hostvars[groups['undercloud'][0]].ansible_ssh_host }} -W {{ hostvars[host].ansible_fqdn }}:22 + ProxyCommand ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o ConnectTimeout=60 -i ~/.ssh/id_rsa -F ssh.config.ansible {{ hostvars[groups['undercloud'][0]].ansible_host }} -W {{ hostvars[host].ansible_fqdn }}:22 IdentityFile id_rsa_undercloud IdentitiesOnly yes User heat-admin diff --git a/playbooks/installer/rdo-manager/templates/virt-setup-env.j2 b/playbooks/installer/rdo-manager/templates/virt-setup-env.j2 index 3245f0bad..e929c8715 100644 --- a/playbooks/installer/rdo-manager/templates/virt-setup-env.j2 +++ b/playbooks/installer/rdo-manager/templates/virt-setup-env.j2 @@ -39,10 +39,18 @@ export NODE_MEM={{ installer.nodes.node_mem | default('4096') }} export NODE_CPU={{ installer.nodes.node_cpu | default('1') }} {%endif %} +{% if installer.nodes.node_disk is defined %} +export NODE_DISK={{ installer.nodes.node_disk | default('50') }} +{%endif %} + {% if installer.nodes.undercloud_node_mem is defined %} export UNDERCLOUD_NODE_MEM={{ installer.nodes.undercloud_node_mem | default('4096') }} {%endif %} +{% if 
installer.nodes.undercloud_node_cpu is defined %} +export UNDERCLOUD_NODE_CPU={{ installer.nodes.undercloud_node_cpu | default('1') }} +{%endif %} + {% if product.full_version == "7-director" and installer.network.isolation != "none" %} export TESTENV_ARGS=" --baremetal-bridge-names 'brbm' --vlan-trunk-ids='10 20 30 40 50'" {%endif %} @@ -50,3 +58,7 @@ export TESTENV_ARGS=" --baremetal-bridge-names 'brbm' --vlan-trunk-ids='10 20 30 {% if product.full_version != "7-director" and installer.network.isolation != "none" %} export TESTENV_ARGS=" --baremetal-bridge-names 'brbm' " {%endif %} + +{% if installer.proxy != 'none' %} +export http_proxy={{ installer.http_proxy_url }} +{%endif %} diff --git a/playbooks/installer/rdo-manager/undercloud/README.txt b/playbooks/installer/rdo-manager/undercloud/README.txt new file mode 100644 index 000000000..cf27d1638 --- /dev/null +++ b/playbooks/installer/rdo-manager/undercloud/README.txt @@ -0,0 +1,3 @@ +This playbook follows the documentation from tripleo as closely as possible + +http://docs.openstack.org/developer/tripleo-docs/installation/installation.html#installing-the-undercloud diff --git a/playbooks/installer/rdo-manager/undercloud/build-images.yml b/playbooks/installer/rdo-manager/undercloud/build-images.yml deleted file mode 100644 index d94d00c38..000000000 --- a/playbooks/installer/rdo-manager/undercloud/build-images.yml +++ /dev/null @@ -1,51 +0,0 @@ ---- -- name: build or import images - hosts: undercloud - tasks: - - name: setup environment vars - template: src={{ base_dir }}/khaleesi/playbooks/installer/rdo-manager/templates/build-img-env.j2 dest=~/build-img-env mode=0755 - - - name: ensure /tmp/svc-map-services is absent - file: path=/tmp/svc-map-services state=absent - sudo: yes - when: installer.overcloud_images | default('build') == "build" - - - name: Contents of build-img-env - shell: > - cat {{ instack_user_home }}/build-img-env - - - name: build all the images - shell: > - source {{ instack_user_home 
}}/build-img-env; - openstack overcloud image build --all > {{ instack_user_home }}/openstack-build-images.log - when: installer.overcloud_images | default('build') == "build" - - - name: ensure wget is installed - yum: name=wget state=latest - sudo: yes - - - name: download the pre-built rdo-manager images - shell: > - wget --quiet -c -O {{ instack_user_home }}/{{ item }}.tar - "{{ installer.images.url[product.name][product.full_version][product.build][installer.images.version] }}{{ item }}.tar" - with_items: "{{ installer.images[product.full_version].files|list }}" - when: installer.overcloud_images is defined and installer.overcloud_images == "import" - -- name: prep and upload images into glance - hosts: undercloud - tasks: - - name: untar the overcloud images - shell: tar -xvf "{{ instack_user_home }}/{{ item }}.tar" - with_items: "{{ installer.images[product.full_version].files|list }}" - when: installer.overcloud_images is defined and installer.overcloud_images == "import" - - - name: download the fedora-user image - get_url: url="{{ distro.images['fedora']['21'].remote_file_server }}{{ distro.images['fedora']['21'].guest_image_name }}" - dest={{ instack_user_home }}/fedora-user.qcow2 - force=no - timeout=60 - - - name: prepare for overcloud by loading the images into glance - shell: > - source {{ instack_user_home }}/stackrc; - openstack overcloud image upload diff --git a/playbooks/installer/rdo-manager/undercloud/gate.yml b/playbooks/installer/rdo-manager/undercloud/gate.yml new file mode 100644 index 000000000..577333e86 --- /dev/null +++ b/playbooks/installer/rdo-manager/undercloud/gate.yml @@ -0,0 +1,42 @@ +--- +- name: Group all hosts in gate if we are gating using delorean + hosts: all + tasks: + - group_by: key=gate-delorean + when: use_delorean is defined and use_delorean + +- name: Run Delorean + hosts: virthost:&gate-delorean + roles: + - delorean + +- name: Create local repo for delorean rpms + hosts: undercloud:&gate-delorean + roles: + - 
delorean_rpms + +- name: Group all hosts in gate if we are gating + hosts: all + tasks: + - group_by: key=gate-install-rpm + when: gating_repo is defined + +- name: Install the custom rpm when gating + hosts: undercloud:&gate-install-rpm + sudo: yes + tasks: + - name: install the gating_repo rpm we previously built + shell: yum -y install /home/stack/*.rpm + +- name: Update all packages + hosts: undercloud:&gate-delorean + tasks: + - yum: name=* state=latest + sudo: yes + +- include: pre.yml +- include: "pre-{{ installer.env.type }}.yml" +- include: run.yml +- include: "post-{{ installer.env.type }}.yml" +- include: post.yml + diff --git a/playbooks/installer/rdo-manager/undercloud/main.yml b/playbooks/installer/rdo-manager/undercloud/main.yml index 285b27561..63cca9ce3 100644 --- a/playbooks/installer/rdo-manager/undercloud/main.yml +++ b/playbooks/installer/rdo-manager/undercloud/main.yml @@ -1,14 +1,7 @@ --- -- name: clean up rdo-manager virthost - hosts: virthost - vars: - - ansible_ssh_user: root - roles: - - { role: cleanup_nodes/rdo-manager, - when: (installer.type == "rdo-manager" and provisioner.type == "manual") - } - -- include: pre-{{ installer.env.type }}.yml +- include: pre.yml +- include: "pre-{{ installer.env.type }}.yml" - include: run.yml -- include: build-images.yml +- include: "post-{{ installer.env.type }}.yml" +- include: post.yml diff --git a/playbooks/installer/rdo-manager/undercloud/post-baremetal.yml b/playbooks/installer/rdo-manager/undercloud/post-baremetal.yml new file mode 100644 index 000000000..c44e36d4d --- /dev/null +++ b/playbooks/installer/rdo-manager/undercloud/post-baremetal.yml @@ -0,0 +1,45 @@ +--- +- name: Execute vendor-specific setup for baremetal environment + hosts: undercloud:&baremetal + tasks: + - name: copy vendor-specific setup file + synchronize: > + src={{base_dir}}/khaleesi-settings/hardware_environments/{{hw_env.env_type}}/vendor_specific_setup dest={{ instack_user_home }}/vendor_specific_setup + delegate_to: 
local + when: hw_env.env_type != 'ovb_host_cloud' + + - name: copy over vendor-specific setup file (quintupleo_host_cloud) + local_action: command rsync --delay-updates -F --compress --archive --rsh "ssh -i {{ provisioner.key_file }} -F {{base_dir}}/khaleesi/ssh.config.ansible -S none -o StrictHostKeyChecking=no" {{base_dir}}/khaleesi-settings/hardware_environments/{{hw_env.env_type}}/vendor_specific_setup undercloud:{{ instack_user_home }}/vendor_specific_setup + when: hw_env.env_type == 'ovb_host_cloud' + + - name: execute vendor-specific setup + shell: > + chmod 755 {{ instack_user_home }}/vendor_specific_setup; + {{ instack_user_home }}/vendor_specific_setup + +- name: Set ironic to control the power state + hosts: undercloud:&baremetal + tasks: + - name: get power state from /etc/ironic/ironic.conf (workaround for bz 1246641) + sudo: yes + shell: > + sudo cat /etc/ironic/ironic.conf | grep 'force_power_state_during_sync=False' + when: workarounds.enabled is defined and workarounds.enabled|bool + + - name: allow ironic to control the power state (workaround for bz 1246641) + sudo: yes + shell: > + sed -i 's/force_power_state_during_sync=False/force_power_state_during_sync=True/g' /etc/ironic/ironic.conf + when: workarounds.enabled is defined and workarounds.enabled|bool + + - name: get power state from /etc/ironic/ironic.conf (workaround for bz 1246641) + sudo: yes + shell: > + sudo cat /etc/ironic/ironic.conf | grep 'force_power_state_during_sync=True' + when: workarounds.enabled is defined and workarounds.enabled|bool + + - name: restart openstack-ironic-conductor (workaround for bz 1246641) + sudo: yes + shell: > + systemctl restart openstack-ironic-conductor + when: workarounds.enabled is defined and workarounds.enabled|bool diff --git a/playbooks/installer/rdo-manager/undercloud/post-virthost.yml b/playbooks/installer/rdo-manager/undercloud/post-virthost.yml new file mode 100644 index 000000000..f7149cfc7 --- /dev/null +++ 
b/playbooks/installer/rdo-manager/undercloud/post-virthost.yml @@ -0,0 +1,17 @@ +--- +- name: setup networking on virt for network isolation + hosts: undercloud:&virthost + tasks: + - name: net-iso virt setup vlans (ipv4) + shell: > + source {{ instack_user_home }}/stackrc; + sudo ovs-vsctl add-port br-ctlplane vlan10 tag=10 -- set interface vlan10 type=internal; + sudo ip l set dev vlan10 up; sudo ip addr add 172.16.23.251/24 dev vlan10; + when: installer.network.isolation == 'single_nic_vlans' + + - name: net-iso virt setup vlans (ipv6) + shell: > + source {{ instack_user_home }}/stackrc; + sudo ovs-vsctl add-port br-ctlplane vlan10 tag=10 -- set interface vlan10 type=internal; + sudo ip l set dev vlan10 up; sudo ip addr add 2001:db8:fd00:1000:dead:beef:cafe:f00/64 dev vlan10; + when: installer.network.isolation == 'single_nic_vlans_ipv6' diff --git a/playbooks/installer/rdo-manager/undercloud/post.yml b/playbooks/installer/rdo-manager/undercloud/post.yml new file mode 100644 index 000000000..9df7a6dac --- /dev/null +++ b/playbooks/installer/rdo-manager/undercloud/post.yml @@ -0,0 +1,45 @@ +--- +- name: undercloud post install workarounds + hosts: undercloud + tasks: + - name: disable haproxy check (workaround bug bz 1246525) + sudo: yes + replace: dest=/etc/haproxy/haproxy.cfg regexp='(listen ironic\n.*\n.*)\n.*option httpchk GET \/' replace='\1' + when: workarounds.enabled is defined and workarounds.enabled|bool and ha_config_file.stat.exists + + - name: restart haproxy service (workaround bug bz 1246525) + sudo: yes + command: systemctl restart haproxy + when: workarounds.enabled is defined and workarounds.enabled|bool and ha_config_file.stat.exists + + - name: increase stack_action_timeout to 4 hours (workaround for bz 1243365) + sudo: yes + command: openstack-config --set /etc/heat/heat.conf DEFAULT stack_action_timeout 14400 + when: workarounds.enabled is defined and workarounds.enabled|bool + + - name: restart openstack-heat-engine (workaround for bz 
1243365) + sudo: yes + command: systemctl restart openstack-heat-engine + when: workarounds.enabled is defined and workarounds.enabled|bool + + - name: check if haproxy is present (workaround bug bz 1246525) + stat: path=/etc/haproxy/haproxy.cfg + register: ha_config_file + + - name: disable haproxy check (workaround bug bz 1246525) + sudo: yes + replace: dest=/etc/haproxy/haproxy.cfg regexp='(listen ironic\n.*\n.*)\n.*option httpchk GET \/' replace='\1' + when: workarounds.enabled is defined and workarounds.enabled|bool and ha_config_file.stat.exists + + - name: restart haproxy service (workaround bug bz 1246525) + sudo: yes + command: systemctl restart haproxy + when: workarounds.enabled is defined and workarounds.enabled|bool and ha_config_file.stat.exists + +- name: update neutron values for undercloud + hosts: undercloud + tasks: + - name: update neutron quota to unlimited + shell: > + source {{ instack_user_home }}/stackrc; + neutron quota-update --port -1; diff --git a/playbooks/installer/rdo-manager/undercloud/pre-baremetal.yml b/playbooks/installer/rdo-manager/undercloud/pre-baremetal.yml index 1465740b8..5ab77e744 100644 --- a/playbooks/installer/rdo-manager/undercloud/pre-baremetal.yml +++ b/playbooks/installer/rdo-manager/undercloud/pre-baremetal.yml @@ -1,159 +1,115 @@ --- -- name: Update packages on the host - hosts: undercloud - vars: - - ansible_ssh_user: root +- name: Customize the answer file for baremetal deployment + hosts: undercloud:&baremetal tasks: - - name: repolist - command: yum -d 7 repolist - - - name: update all packages - yum: name=* state=latest - -- name: Create the stack user on the undercloud - hosts: undercloud - vars: - - ansible_ssh_user: root - tasks: - - name: delete user (workaround for BZ 1284717) - user: name="{{ provisioner.remote_user }}" state=absent remove=yes force=yes - tags: workaround - - - name: inspect user removal - shell: > - ls /home | grep "{{ provisioner.remote_user }}"; - cat /etc/passwd | grep "{{ 
provisioner.remote_user }}";
-        cat /etc/group | grep "{{ provisioner.remote_user }}";
-        cat /etc/shadow | grep "{{ provisioner.remote_user }}";
-        ls /var/spool/mail | grep "{{ provisioner.remote_user }}";
-      register: result
-      failed_when: result.rc != 1
-
-    - name: create user
-      user: name="{{ provisioner.remote_user }}" state=present password=stack
-
-    - name: copy the .bash_profile file
-      command: cp /root/.bash_profile /home/{{ provisioner.remote_user }}/
-
-    - name: create .ssh dir
-      file: path=/home/{{ provisioner.remote_user }}/.ssh mode=0700 owner=stack group=stack state=directory
-
-    - name: copy the authorized_keys file
-      command: cp /root/.ssh/authorized_keys /home/{{ provisioner.remote_user }}/.ssh/
-
-    - name: set file permissions on authorized_hosts
-      file: path=/home/{{ provisioner.remote_user }}/.ssh/authorized_keys mode=0600 owner=stack group=stack
-
-    - name: copy ssh keys
-      command: cp /root/.ssh/id_rsa /home/{{ provisioner.remote_user }}/.ssh/
-      when: hw_env.env_type == 'ovb_host_cloud'
-
-    - name: copy ssh pub keys
-      command: cp /root/.ssh/id_rsa.pub /home/{{ provisioner.remote_user }}/.ssh/
-      when: hw_env.env_type == 'ovb_host_cloud'
-
-    - name: set permission on keys
-      file: path=/home/{{ provisioner.remote_user }}/.ssh/id_rsa mode=0600 owner=stack group=stack
-      when: hw_env.env_type == 'ovb_host_cloud'
-
-    - name: set permission on pub keys
-      file: path=/home/{{ provisioner.remote_user }}/.ssh/id_rsa.pub mode=0644 owner=stack group=stack
-      when: hw_env.env_type == 'ovb_host_cloud'
-
-    - name: add user to sudoers
-      lineinfile: dest=/etc/sudoers line="stack ALL=(root) NOPASSWD:ALL"
-
-    - name: set fact for the stack user home
-      set_fact: instack_user_home=/home/{{ provisioner.remote_user }}
-
-    - name: enabling ip forwarding
-      lineinfile: dest=/etc/sysctl.conf line='net.ipv4.ip_forward = 1' insertafter=EOF state=present
-      when: hw_env.ip_forwarding is defined and hw_env.ip_forwarding == 'true'
-
-    - name: check ip forwarding
-      shell: sysctl -p /etc/sysctl.conf
-      when: hw_env.ip_forwarding is defined and hw_env.ip_forwarding == 'true'
-
-- include: repo-{{ product.name }}.yml repo_host=undercloud
-
-- name: Configure the baremetal undercloud
-  hosts: undercloud
-  tasks:
-    - name: check if instackenv.json exists in root
-      stat: path="/root/instackenv.json"
-      register: instackenv_json_root
-      sudo_user: root
-      sudo: yes
-
-    - name: copy instackenv.json from root if it exists there
-      shell: cp /root/instackenv.json {{ instack_user_home }}/instackenv.json
-      when: instackenv_json_root.stat.exists == True
-      sudo_user: root
-      sudo: yes
-
-    - name: get instackenv.json
-      synchronize: src={{base_dir}}/khaleesi-settings/hardware_environments/{{hw_env.env_type}}/instackenv.json dest={{ instack_user_home }}/instackenv.json
-      when: instackenv_json_root.stat.exists == False
-
-    - name: chown instackenv.json
-      file: path={{ instack_user_home }}/instackenv.json owner=stack group=stack
-      sudo_user: root
-      sudo: yes
-
-    - name: install ipmitool
-      yum: name={{ item }} state=latest
-      with_items:
-        - OpenIPMI
-        - OpenIPMI-tools
-      sudo_user: root
-      sudo: yes
-
-    - name: install sshpass - DRACS
-      yum: name=sshpass state=latest
-      sudo_user: root
-      sudo: yes
-      when: hw_env.remote_mgmt == "dracs"
-
-    - name: start IMPI service
-      shell: >
-        sudo chkconfig ipmi on;
-        sudo service ipmi start
-
-    - name: get tools to validate instackenv.json/nodes.json
-      git: >
-        repo="https://github.com/rthallisey/clapper.git"
-        dest="{{instack_user_home}}/clapper"
-
-    - name: validate instackenv.json
-      shell: >
-        chdir={{instack_user_home}}
-        python clapper/instackenv-validator.py -f {{ instack_user_home }}/instackenv.json
-      register: instackenv_validator_output
-
-    - name: fail if instackenv.json fails validation
-      fail: msg="instackenv.json didn't validate."
-      when: instackenv_validator_output.stdout.find("SUCCESS") == -1
-
-    - name: get number of overcloud nodes
-      shell: >
-        export IP_LENGTH=(`cat {{ instack_user_home }}/instackenv.json | grep -o 'pm_addr.*' | cut -f2- -d':' | wc -l`);
-        echo $(($IP_LENGTH))
-      register: node_length
-
-    - name: power off node boxes - IPMI
-      shell: >
-        export IP=(`cat {{ instack_user_home }}/instackenv.json | grep -o 'pm_addr.*' | cut -f2- -d':' | sed 's/[},\"]//g'`);
-        export USER=(`cat {{ instack_user_home }}/instackenv.json | grep -o 'pm_user.*' | cut -f2- -d':' |rev | cut -c 2- | rev | sed 's/[},\"]//g'`);
-        export PASSWORD=(`cat {{ instack_user_home }}/instackenv.json | grep -o 'pm_password.*' | cut -f2- -d':' |rev | cut -c 2- | rev | sed 's/[},\"]//g'`);
-        ipmitool -I lanplus -H ${IP[item]} -U ${USER[item]} -P ${PASSWORD[item]} power off
-      with_sequence: count="{{node_length.stdout}}"
-      when: hw_env.remote_mgmt == "ipmi"
-
-    - name: power off node boxes - DRACS
-      shell: >
-        export IP=(`cat {{ instack_user_home }}/instackenv.json | grep -o 'pm_addr.*' | cut -f2- -d':' | sed 's/[},\"]//g'`);
-        export USER=(`cat {{ instack_user_home }}/instackenv.json | grep -o 'pm_user.*' | cut -f2- -d':' |rev | cut -c 2- | rev | sed 's/[},\"]//g'`);
-        export PASSWORD=(`cat {{ instack_user_home }}/instackenv.json | grep -o 'pm_password.*' | cut -f2- -d':' |rev | cut -c 2- | rev | sed 's/[},\"]//g'`);
-        sshpass -p ${PASSWORD[item]} ssh -o "StrictHostKeyChecking=no" ${USER[item]}@${IP[item]} "racadm serveraction powerdown"
-      with_sequence: count="{{node_length.stdout}}"
-      when: hw_env.remote_mgmt == "dracs"
+    - name: check if answers file exists
+      stat: path="/usr/share/instack-undercloud/instack.answers.sample"
+      register: answers_file_present
+
+    - name: check if conf file exists
+      stat: path="/usr/share/instack-undercloud/undercloud.conf.sample"
+      register: conf_file_present
+
+    - name: fail if there is no answers file and no conf file
+      fail: msg="Neither a conf file nor an answers file exists"
+      when: answers_file_present.stat.exists == False and conf_file_present.stat.exists == False
+
+    - name: copy baremetal answers file
+      shell: cp /usr/share/instack-undercloud/instack.answers.sample {{ instack_user_home }}/instack.answers
+      when: answers_file_present.stat.exists == True
+
+    - name: edit instack.answers file - local_interface
+      lineinfile: dest={{ instack_user_home }}/instack.answers regexp=^LOCAL_INTERFACE line=LOCAL_INTERFACE={{ hw_env.answers_local_interface }}
+      when: answers_file_present.stat.exists == True
+
+    - name: edit instack.answers file - network
+      replace: dest={{ instack_user_home }}/instack.answers regexp='192.0.2' replace={{ hw_env.network }}
+      when: hw_env.network is defined and answers_file_present.stat.exists == True
+
+    - name: edit instack.answers file - network gateway
+      lineinfile: dest={{ instack_user_home }}/instack.answers regexp=^NETWORK_GATEWAY line=NETWORK_GATEWAY={{ hw_env.network_gateway }}
+      when: answers_file_present.stat.exists == True
+
+    - name: copy baremetal conf file
+      shell: cp /usr/share/instack-undercloud/undercloud.conf.sample {{ instack_user_home }}/undercloud.conf
+      when: conf_file_present.stat.exists == True
+
+    - name: edit undercloud.conf file - local_interface
+      lineinfile: dest={{ instack_user_home }}/undercloud.conf regexp=^#local_interface line=local_interface={{ hw_env.answers_local_interface }}
+      when: conf_file_present.stat.exists == True
+
+    - name: edit undercloud.conf file - dhcp_start
+      lineinfile: dest={{ instack_user_home }}/undercloud.conf regexp=^#dhcp_start line=dhcp_start={{ hw_env.dhcp_start }}
+      when: conf_file_present.stat.exists == True and hw_env.dhcp_start is defined
+
+    - name: edit undercloud.conf file - dhcp_end
+      lineinfile: dest={{ instack_user_home }}/undercloud.conf regexp=^#dhcp_end line=dhcp_end={{ hw_env.dhcp_end }}
+      when: conf_file_present.stat.exists == True and hw_env.dhcp_end is defined
+
+    - name: edit undercloud.conf file - discovery_iprange
+      lineinfile: dest={{ instack_user_home }}/undercloud.conf regexp=^#discovery_iprange line=discovery_iprange={{ hw_env.discovery_iprange }}
+      when: conf_file_present.stat.exists == True and hw_env.discovery_iprange is defined
+
+    - name: edit undercloud.conf file - network_gateway
+      lineinfile: dest={{ instack_user_home }}/undercloud.conf regexp=^#network_gateway line=network_gateway={{ hw_env.undercloud_network_gateway }}
+      when: conf_file_present.stat.exists == True and hw_env.undercloud_network_gateway is defined
+
+    - name: edit undercloud.conf file - local_ip
+      lineinfile: dest={{ instack_user_home }}/undercloud.conf regexp=^#local_ip line=local_ip={{ hw_env.undercloud_local_ip }}
+      when: conf_file_present.stat.exists == True and hw_env.undercloud_local_ip is defined
+
+    - name: edit undercloud.conf file - undercloud_public_vip
+      lineinfile: dest={{ instack_user_home }}/undercloud.conf regexp=^#undercloud_public_vip line=undercloud_public_vip={{ hw_env.undercloud_public_vip }}
+      when: conf_file_present.stat.exists == True and hw_env.undercloud_public_vip is defined
+
+    - name: edit undercloud.conf file - undercloud_admin_vip
+      lineinfile: dest={{ instack_user_home }}/undercloud.conf regexp=^#undercloud_admin_vip line=undercloud_admin_vip={{ hw_env.undercloud_admin_vip }}
+      when: conf_file_present.stat.exists == True and hw_env.undercloud_admin_vip is defined
+
+    - name: edit undercloud.conf file - network
+      shell: >
+        sed -i 's/192.0.2/{{ hw_env.network }}/g' {{ instack_user_home }}/undercloud.conf;
+        sed -i '/{{ hw_env.network }}/s/#//g' {{ instack_user_home }}/undercloud.conf
+      when: hw_env.network is defined and conf_file_present.stat.exists == True
+
+    - name: register short hostname
+      shell: "hostname -s"
+      register: short_hostname
+
+    - name: register full hostname
+      shell: "cat /etc/hostname"
+      register: full_hostname
+
+    - name: set the hostname
+      sudo: yes
+      shell: >
+        hostnamectl set-hostname {{ full_hostname.stdout }};
+        hostnamectl set-hostname --transient {{ full_hostname.stdout }}
+
+    - name: Set /etc/hostname for those that need it
+      sudo: yes
+      lineinfile: >
+        dest=/etc/hosts
+        line="127.0.1.1 {{ short_hostname.stdout }} {{ full_hostname.stdout }}"
+
+    - name: get domain from /etc/resolv.conf
+      sudo: yes
+      shell: cat /etc/resolv.conf | grep search | sed -n -e 's/^.*search //p'
+      register: search_domain
+
+    - name: add short and full hostname to /etc/hosts
+      sudo: yes
+      shell: "sed -i 's/localhost4.localdomain4/localhost4.localdomain4 {{ short_hostname.stdout }} {{ full_hostname.stdout }} {{ short_hostname.stdout }}.{{ search_domain.stdout }}/g' /etc/hosts"
+
+    - name: add images and templates folders
+      shell: >
+        mkdir {{ instack_user_home }}/images;
+        mkdir {{ instack_user_home }}/templates
+      when: hw_env.env_type == 'scale_lab'
+
+    - name: copy instackenv.json to nodes.json
+      shell: cp {{ instack_user_home }}/instackenv.json {{ instack_user_home }}/nodes.json
+
+    - name: installing python-six (workaround)
+      sudo: yes
+      yum: name=python-six state=present
diff --git a/playbooks/installer/rdo-manager/undercloud/pre-virthost.yml b/playbooks/installer/rdo-manager/undercloud/pre-virthost.yml
index fb55d9819..21314c656 100644
--- a/playbooks/installer/rdo-manager/undercloud/pre-virthost.yml
+++ b/playbooks/installer/rdo-manager/undercloud/pre-virthost.yml
@@ -1,301 +1,12 @@
 ---
-- name: Create the stack user on the virthost
-  hosts: virthost
+- name: Update packages on the host
+  hosts: undercloud
   vars:
-    - ansible_ssh_user: root
+    - ansible_user: root
   tasks:
-    - name: create user
-      user: name="{{ provisioner.remote_user }}" state=present password=stack
+    - name: repolist
+      command: yum -d 7 repolist
 
-    - name: copy the .bash_profile file
-      command: cp /root/.bash_profile /home/{{ provisioner.remote_user }}/
+    - name: update all packages
+      yum: name=* state=latest
 
-    - name: set file permissions on .bash_profile
-      file: path=/home/{{ provisioner.remote_user }}/.bash_profile mode=0755 owner={{ provisioner.remote_user }} group={{ provisioner.remote_user }}
-
-    - name: create .ssh dir
-      file: path=/home/{{ provisioner.remote_user }}/.ssh mode=0700 owner={{ provisioner.remote_user }} group=stack state=directory
-
-    - name: copy the authorized_keys file
-      command: cp /root/.ssh/authorized_keys /home/{{ provisioner.remote_user }}/.ssh/
-
-    - name: set file permissions on authorized_hosts
-      file: path=/home/{{ provisioner.remote_user }}/.ssh/authorized_keys mode=0600 owner={{ provisioner.remote_user }} group={{ provisioner.remote_user }}
-
-    - name: add user to sudoers
-      lineinfile: dest=/etc/sudoers line="{{ provisioner.remote_user }} ALL=(root) NOPASSWD:ALL"
-
-    - name: set fact for the stack user home
-      set_fact: instack_user_home=/home/{{ provisioner.remote_user }}
-
-- include: repo-{{ product.name }}.yml repo_host=virthost
-
-- name: Copy the gating package
-  hosts: virthost
-  tasks:
-    - name: copy downstream rpm package
-      copy: src={{ item }} dest=/home/{{ ansible_ssh_user }}/
-      with_fileglob:
-        - "{{ lookup('env', 'PWD') }}/generated_rpms/*.rpm"
-      when: gating_repo is defined
-
-- name: setup the virt host
-  hosts: virthost
-  tasks:
-    - name: install the generated rpm
-      shell: "yum install -y /home/{{ ansible_ssh_user }}/{{gating_repo}}*.rpm"
-      sudo: yes
-      when: gating_repo is defined
-
-- name: setup the virt host
-  hosts: virthost
-  tasks:
-    - name: set fact stack user home
-      set_fact: instack_user_home=/home/{{ provisioner.remote_user }}
-
-    - name: get the guest-image
-      sudo: yes
-      get_url: >
-        url="{{ distro.images[distro.name][distro.full_version].remote_file_server }}{{ distro.images[distro.name][distro.full_version].guest_image_name }}"
-        dest=/root/{{ distro.images[distro.name][distro.full_version].guest_image_name }}
-
-    - name: copy the guest-image in stack user home
-      sudo: yes
-      command: cp /root/{{ distro.images[distro.name][distro.full_version].guest_image_name }} {{instack_user_home}}/{{ distro.images[distro.name][distro.full_version].guest_image_name }}
-
-    - name: set the right permissions for the guest-image
-      sudo: yes
-      file: >
-        path={{instack_user_home}}/{{ distro.images[distro.name][distro.full_version].guest_image_name }}
-        owner={{ provisioner.remote_user }}
-        group={{ provisioner.remote_user }}
-
-    - name: install yum-plugin-priorities for rdo-manager
-      yum: name={{item}} state=present
-      sudo: yes
-      with_items:
-        - yum-plugin-priorities
-      when: product.name == "rdo"
-
-    - name: install rdo-manager-deps
-      yum: name={{item}} state=present
-      sudo: yes
-      with_items:
-        - python-tripleoclient
-      when: product.name == "rdo" or product.full_version == "8-director"
-
-    - name: install python-rdomanager-oscplugin
-      yum: name=python-rdomanager-oscplugin state=present
-      sudo: yes
-
-    - name: setup environment vars
-      template: src={{ base_dir }}/khaleesi/playbooks/installer/rdo-manager/templates/virt-setup-env.j2 dest=~/virt-setup-env mode=0755
-
-    - name: Contents of virt-setup-env
-      shell: >
-        cat {{ instack_user_home }}/virt-setup-env
-
-    - name: Patch instack-virt-setup to ensure dhcp.leases is not used to determine ip (workaround https://review.openstack.org/#/c/232584)
-      sudo: yes
-      lineinfile:
-        dest=/usr/bin/instack-virt-setup
-        regexp="/var/lib/libvirt/dnsmasq/default.leases"
-        line=" IP=$(ip n | grep $(tripleo get-vm-mac $UNDERCLOUD_VM_NAME) | awk '{print $1;}')"
-      when: workarounds.enabled is defined and workarounds.enabled|bool
-
-    - name: run instack-virt-setup
-      shell: >
-        source {{ instack_user_home }}/virt-setup-env;
-        instack-virt-setup > {{ instack_user_home }}/instack-virt-setup.log;
-      register: instack_virt_setup_result
-      ignore_errors: yes
-
-    - name: destroy default pool
-      command: virsh pool-destroy default
-      sudo: yes
-      ignore_errors: true
-      when: "instack_virt_setup_result.rc !=0"
-
-    - name: update libvirtd unix_sock_group
-      lineinfile: dest=/etc/libvirt/libvirtd.conf
-                  regexp=^unix_sock_group
-                  line='unix_sock_group = "{{ provisioner.remote_user }}"'
-      when: "instack_virt_setup_result.rc !=0"
-      sudo: yes
-
-    - name: remove libvirt qemu capabilities cache
-      command: rm -Rf /var/cache/libvirt/qemu/capabilities/
-      sudo: yes
-      when: "instack_virt_setup_result.rc != 0"
-      # more workaround for the SATA error RHBZ#1195882
-
-    - name: restart libvirtd
-      service: name=libvirtd state=restarted
-      sudo: yes
-      when: "instack_virt_setup_result.rc != 0"
-
-    - name: inspect virsh capabilities
-      command: 'virsh capabilities'
-      when: "instack_virt_setup_result.rc != 0"
-
-    - name: stop virbr0
-      command: ip link set virbr0 down
-      sudo: yes
-      ignore_errors: true
-      when: "instack_virt_setup_result.rc != 0"
-
-    - name: delete libvirt bridge virbr0
-      command: brctl delbr virbr0
-      sudo: yes
-      ignore_errors: true
-      when: "instack_virt_setup_result.rc != 0"
-
-    - name: start default libvirt network
-      command: virsh net-start default
-      sudo: yes
-      ignore_errors: true
-      when: "instack_virt_setup_result.rc != 0"
-
-    - name: delete instack domain before re-try of instack-virt-setup
-      command: virsh undefine instack
-      sudo: yes
-      ignore_errors: true
-      when: "instack_virt_setup_result.rc !=0"
-
-    - name: retry run instack-virt-setup
-      shell: >
-        source {{ instack_user_home }}/virt-setup-env;
-        instack-virt-setup > {{ instack_user_home }}/instack-virt-setup-retry.log;
-      when: "instack_virt_setup_result.rc !=0"
-
-    - name: print out all the VMs
-      shell: >
-        sudo virsh list --all
-
-    - name: get undercloud vm ip address
-      shell: >
-        export PATH='/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/stack/bin';
-        ip n | grep $(tripleo get-vm-mac instack) | awk '{print $1;}'
-      when: undercloud_ip is not defined
-      register: undercloud_vm_ip_result
-
-    - name: set_fact for undercloud ip
-      set_fact: undercloud_ip={{ undercloud_vm_ip_result.stdout }}
-
-- name: setup the virt host
-  hosts: localhost
-  tasks:
-    - name: set_fact for undercloud ip
-      set_fact: undercloud_ip={{ hostvars['host0'].undercloud_ip }}
-
-    - name: debug undercloud_ip
-      debug: var=hostvars['localhost'].undercloud_ip
-
-- name: setup the virt host
-  hosts: virthost
-  tasks:
-    - name: wait until ssh is available on undercloud node
-      wait_for: host={{ hostvars['localhost'].undercloud_ip }}
-                state=started
-                port=22
-                delay=15
-                timeout=300
-
-    - name: add undercloud host
-      add_host:
-        name=undercloud
-        groups=undercloud
-        ansible_ssh_host=undercloud
-        ansible_fqdn=undercloud
-        ansible_ssh_user="{{ provisioner.remote_user }}"
-        ansible_ssh_private_key_file="{{ provisioner.key_file }}"
-        gating_repo="{{ gating_repo is defined and gating_repo }}"
-
-    - name: setup ssh config
-      template: src={{ base_dir }}/khaleesi/playbooks/installer/rdo-manager/templates/ssh_config.j2 dest=~/ssh.config.ansible mode=0755
-
-    - name: copy ssh_config back to the slave
-      fetch: src=~/ssh.config.ansible dest="{{ base_dir }}/khaleesi/ssh.config.ansible" flat=yes
-
-    - name: copy id_rsa key back to the slave
-      fetch: src=~/.ssh/id_rsa dest="{{ base_dir }}/khaleesi/id_rsa_virt_host" flat=yes
-
-    - name: copy undercloud root user authorized_keys to stack user
-      shell: 'ssh -F ssh.config.ansible undercloud-from-virthost "cp /root/.ssh/authorized_keys /home/stack/.ssh/"'
-
-    - name: chown authorized_keys for stack user
-      shell: 'ssh -F ssh.config.ansible undercloud-from-virthost "chown stack:stack /home/stack/.ssh/authorized_keys"'
-
-    - name: copy gating_repo package
-      shell: >
-        scp -F ssh.config.ansible /home/{{ ansible_ssh_user }}/{{ gating_repo }}*.rpm undercloud-from-virthost:{{ instack_user_home }}/
-      when: gating_repo is defined
-
-- name: regenerate the inventory file after adding hosts
-  hosts: localhost
-  tasks:
-    - name: create inventory from template
-      template:
-        dest: "{{ lookup('env', 'PWD') }}/{{ tmp.node_prefix }}hosts"
-        src: "{{ base_dir }}/khaleesi/playbooks/provisioner/templates/inventory.j2"
-
-    - name: symlink inventory to a static name
-      file:
-        dest: "{{ lookup('env', 'PWD') }}/hosts"
-        state: link
-        src: "{{ lookup('env', 'PWD') }}/{{ tmp.node_prefix }}hosts"
-
-- name: copy the guest image to the undercloud
-  hosts: virthost
-  tasks:
-    - name: upload the guest-image on the undercloud
-      command: scp -F ssh.config.ansible {{instack_user_home}}/{{ distro.images[distro.name][distro.full_version].guest_image_name }} undercloud-from-virthost:{{ instack_user_home }}/
-
-- name: test host connection
-  hosts: all:!localhost
-  tasks:
-    - name: test ssh
-      command: hostname
-
-    - name: check distro
-      command: cat /etc/redhat-release
-
-    - name: set fact stack user home
-      set_fact: instack_user_home=/home/{{ provisioner.remote_user }}
-
-- include: repo-{{ product.name }}.yml repo_host=undercloud
-
-- name: Group all hosts in gate if we are gating using delorean
-  hosts: all
-  tasks:
-    - group_by: key=gate-delorean
-      when: use_delorean is defined and use_delorean
-
-- name: Run Delorean
-  hosts: virthost:&gate-delorean
-  roles:
-    - delorean
-
-- name: Create local repo for delorean rpms
-  hosts: undercloud:&gate-delorean
-  roles:
-    - delorean_rpms
-
-- name: Update all packages
-  hosts: undercloud:&gate-delorean
-  tasks:
-    - yum: name=* state=latest
-      sudo: yes
-
-- name: Group all hosts in gate if we are gating
-  hosts: all
-  tasks:
-    - group_by: key=gate-install-rpm
-      when: gating_repo is defined
-
-- name: Install the custom rpm when gating
-  hosts: undercloud:&gate-install-rpm
-  sudo: yes
-  tasks:
-    - name: install the gating_repo rpm we previously built
-      shell: yum -y install /home/stack/{{ gating_repo }}*.rpm
diff --git a/playbooks/installer/rdo-manager/undercloud/pre.yml b/playbooks/installer/rdo-manager/undercloud/pre.yml
new file mode 100644
index 000000000..ce00e273a
--- /dev/null
+++ b/playbooks/installer/rdo-manager/undercloud/pre.yml
@@ -0,0 +1,26 @@
+---
+- name: install the undercloud packages
+  hosts: undercloud
+  tasks:
+    - name: install yum-plugin-priorities rdo-manager
+      sudo: yes
+      yum: name={{item}} state=present
+      with_items:
+        - yum-plugin-priorities
+      when: product.name == "rdo"
+
+    - name: install rdo-manager-deps
+      sudo: yes
+      yum: name={{item}} state=present
+      with_items:
+        - python-tripleoclient
+      when: product.name == "rdo" or product.full_version == "8-director"
+
+    - name: install python-rdomanager-oscplugin
+      sudo: yes
+      yum: name=python-rdomanager-oscplugin state=present
+      when: product.full_version == "7-director"
+
+    - name: install python-passlib
+      sudo: yes
+      yum: name=python-passlib state=present
diff --git a/playbooks/installer/rdo-manager/undercloud/repo-rdo.yml b/playbooks/installer/rdo-manager/undercloud/repo-rdo.yml
deleted file mode 100644
index a6ddc09ed..000000000
--- a/playbooks/installer/rdo-manager/undercloud/repo-rdo.yml
+++ /dev/null
@@ -1,96 +0,0 @@
----
-- include: "{{ base_dir }}/khaleesi/playbooks/group_by.yml ansible_ssh_user=root"
-
-- name: Setup openstack repos
-  hosts: "{{ repo_host }}"
-  vars:
-    - ansible_ssh_user: root
-    - product_override_version: 7-director
-  gather_facts: yes
-  tasks:
-    - name: clean release rpms
-      yum: name={{ item }} state=absent
-      with_items:
-        - rdo-release*
-        - epel-release
-        - rhos-release
-
-    - name: remove any yum repos not owned by rpm
-      shell: rm -Rf /etc/yum.repos.d/{{ item }}
-      with_items:
-        - beaker-*
-
-    - name: Install release tool on machine
-      command: "yum localinstall -y {{ product.rpmrepo[ansible_distribution] }}"
-      when: product.repo_type is defined and product.repo_type == 'production'
-
-    #remove this step when rdo and rhos diverge
-    - name: Install extra release tool on machine
-      command: "yum localinstall -y {{ product.rpmrepo_override[ansible_distribution] }}"
-      when: product_override_version is defined and product.repo_type_override == 'rhos-release'
-
-    #remove this step when rdo and rhos diverge
-    - name: Execute rhos-release for rdo-manager (rdo)
-      command: "rhos-release {{ product_override_version }}"
-      when: product_override_version is defined and product.repo_type_override == 'rhos-release'
-
-    - name: Install epel release
-      command: "yum localinstall -y {{ distro.epel_release }}"
-
-    - name: yum clean all
-      command: yum clean all
-
-- name: RHEL RDO prep
-  hosts: "{{ repo_host }}:&RedHat"
-  vars:
-    - ansible_ssh_user: root
-  roles:
-    # enable this role when rdo and rhos officially diverge
-    #- { role: linux/rhel/rdo }
-    - { role: product/rdo/rhel }
-
-- name: CentOS RDO prep
-  hosts: "{{ repo_host }}:&CentOS"
-  vars:
-    - ansible_ssh_user: root
-  roles:
-    - { role: linux/centos }
-    - { role: product/rdo/rhel }
-
-- name: Linux common prep (Collect performance data, etc.)
-  hosts: "{{ repo_host }}"
-  vars:
-    - ansible_ssh_user: root
-  roles:
-    - { role: linux-common }
-
-- name: Update packages on the host
-  hosts: "{{ repo_host }}"
-  vars:
-    - ansible_ssh_user: root
-  tasks:
-    - name: repolist
-      command: yum -d 7 repolist
-
-    - name: update all packages
-      yum: name=* state=latest
-      when: yum_update | bool
-
-    - name: Find if a new kernel was installed
-      shell: find /boot/ -anewer /proc/1/stat -name 'initramfs*' | egrep ".*"
-      register: new_kernel
-      ignore_errors: True
-      when: "'{{ repo_host }}' == 'virthost'"
-
-    - name: reboot host
-      sudo: no
-      local_action:
-        wait_for_ssh
-        reboot_first=true
-        host="{{ ansible_ssh_host }}"
-        user="root"
-        ssh_opts="-F {{ base_dir }}/khaleesi/ssh.config.ansible"
-        key="{{ ansible_ssh_private_key_file }}"
-        timeout=900
-        sudo=false
-      when: "'{{ repo_host }}' == 'virthost' and new_kernel.rc == 0"
diff --git a/playbooks/installer/rdo-manager/undercloud/run.yml b/playbooks/installer/rdo-manager/undercloud/run.yml
index 400d33825..76c323aab 100644
--- a/playbooks/installer/rdo-manager/undercloud/run.yml
+++ b/playbooks/installer/rdo-manager/undercloud/run.yml
@@ -1,169 +1,25 @@
 ---
-- name: install the undercloud packages and get the guest image
-  hosts: undercloud
-  tasks:
-    - name: get the guest-image
-      get_url: >
-        url="{{ distro.images[distro.name][distro.full_version].remote_file_server }}{{ distro.images[distro.name][distro.full_version].guest_image_name }}"
-        dest=/home/stack/{{ distro.images[distro.name][distro.full_version].guest_image_name }}
-        timeout=360
-
-    - name: install python-rdomanager-oscplugin
-      yum: name=python-rdomanager-oscplugin state=present
-      sudo: yes
-
-    - name: install yum-plugin-priorities rdo-manager
-      yum: name={{item}} state=present
-      sudo: yes
-      with_items:
-        - yum-plugin-priorities
-      when: product.name == "rdo"
-
-    - name: install rdo-manager-deps
-      yum: name={{item}} state=present
-      sudo: yes
-      with_items:
-        - python-tripleoclient
-      when: product.name == "rdo" or product.full_version == "8-director"
-
-    - name: install python-rdomanager-oscplugin
-      yum: name=python-rdomanager-oscplugin state=present
-      sudo: yes
-
-    - name: install python-passlib
-      yum: name=python-passlib state=present
-      sudo: yes
-
-
-- name: Customize the answer file for baremetal deployment
-  hosts: undercloud:&baremetal
-  tasks:
-    - name: check if answers file exists
-      stat: path="/usr/share/instack-undercloud/instack.answers.sample"
-      register: answers_file_present
-
-    - name: check if conf file exists
-      stat: path="/usr/share/instack-undercloud/undercloud.conf.sample"
-      register: conf_file_present
-
-    - name: fail if there is no answers file and no conf file
-      fail: msg="Neither a conf file nor an answers file exists"
-      when: answers_file_present.stat.exists == False and conf_file_present.stat.exists == False
-
-    - name: copy baremetal answers file
-      shell: cp /usr/share/instack-undercloud/instack.answers.sample {{ instack_user_home }}/instack.answers
-      when: answers_file_present.stat.exists == True
-
-    - name: edit instack.answers file - local_interface
-      lineinfile: dest={{ instack_user_home }}/instack.answers regexp=^LOCAL_INTERFACE line=LOCAL_INTERFACE={{ hw_env.answers_local_interface }}
-      when: answers_file_present.stat.exists == True
-
-    - name: edit instack.answers file - network
-      replace: dest={{ instack_user_home }}/instack.answers regexp='192.0.2' replace={{ hw_env.network }}
-      when: hw_env.network is defined and answers_file_present.stat.exists == True
-
-    - name: edit instack.answers file - network gateway
-      lineinfile: dest={{ instack_user_home }}/instack.answers regexp=^NETWORK_GATEWAY line=NETWORK_GATEWAY={{ hw_env.network_gateway }}
-      when: answers_file_present.stat.exists == True
-
-    - name: copy baremetal conf file
-      shell: cp /usr/share/instack-undercloud/undercloud.conf.sample {{ instack_user_home }}/undercloud.conf
-      when: conf_file_present.stat.exists == True
-
-    - name: edit undercloud.conf file - local_interface
-      lineinfile: dest={{ instack_user_home }}/undercloud.conf regexp=^#local_interface line=local_interface={{ hw_env.answers_local_interface }}
-      when: conf_file_present.stat.exists == True
-
-    - name: edit undercloud.conf file - dhcp_start
-      lineinfile: dest={{ instack_user_home }}/undercloud.conf regexp=^#dhcp_start line=dhcp_start={{ hw_env.dhcp_start }}
-      when: conf_file_present.stat.exists == True and hw_env.dhcp_start is defined
-
-    - name: edit undercloud.conf file - dhcp_end
-      lineinfile: dest={{ instack_user_home }}/undercloud.conf regexp=^#dhcp_end line=dhcp_end={{ hw_env.dhcp_end }}
-      when: conf_file_present.stat.exists == True and hw_env.dhcp_end is defined
-
-    - name: edit undercloud.conf file - discovery_iprange
-      lineinfile: dest={{ instack_user_home }}/undercloud.conf regexp=^#discovery_iprange line=discovery_iprange={{ hw_env.discovery_iprange }}
-      when: conf_file_present.stat.exists == True and hw_env.discovery_iprange is defined
-
-    - name: edit undercloud.conf file - network_gateway
-      lineinfile: dest={{ instack_user_home }}/undercloud.conf regexp=^#network_gateway line=network_gateway={{ hw_env.undercloud_network_gateway }}
-      when: conf_file_present.stat.exists == True and hw_env.undercloud_network_gateway is defined
-
-    - name: edit undercloud.conf file - local_ip
-      lineinfile: dest={{ instack_user_home }}/undercloud.conf regexp=^#local_ip line=local_ip={{ hw_env.undercloud_local_ip }}
-      when: conf_file_present.stat.exists == True and hw_env.undercloud_local_ip is defined
-
-    - name: edit undercloud.conf file - undercloud_public_vip
-      lineinfile: dest={{ instack_user_home }}/undercloud.conf regexp=^#undercloud_public_vip line=undercloud_public_vip={{ hw_env.undercloud_public_vip }}
-      when: conf_file_present.stat.exists == True and hw_env.undercloud_public_vip is defined
-
-    - name: edit undercloud.conf file - undercloud_admin_vip
-      lineinfile: dest={{ instack_user_home }}/undercloud.conf regexp=^#undercloud_admin_vip line=undercloud_admin_vip={{ hw_env.undercloud_admin_vip }}
-      when: conf_file_present.stat.exists == True and hw_env.undercloud_admin_vip is defined
-
-    - name: edit undercloud.conf file - network
-      shell: >
-        sed -i 's/192.0.2/{{ hw_env.network }}/g' {{ instack_user_home }}/undercloud.conf;
-        sed -i '/{{ hw_env.network }}/s/#//g' {{ instack_user_home }}/undercloud.conf
-      when: hw_env.network is defined and conf_file_present.stat.exists == True
-
-    - name: register short hostname
-      shell: "hostname -s"
-      register: short_hostname
-
-    - name: register full hostname
-      shell: "cat /etc/hostname"
-      register: full_hostname
-
-    - name: set the hostname
-      shell: >
-        hostnamectl set-hostname {{ full_hostname.stdout }};
-        hostnamectl set-hostname --transient {{ full_hostname.stdout }}
-      sudo: yes
-
-    - name: Set /etc/hostname for those that need it
-      lineinfile: >
-        dest=/etc/hosts
-        line="127.0.1.1 {{ short_hostname.stdout }} {{ full_hostname.stdout }}"
-      sudo: yes
-
-    - name: get domain from /etc/resolv.conf
-      shell: cat /etc/resolv.conf | grep search | sed -n -e 's/^.*search //p'
-      register: search_domain
-      sudo: yes
-
-    - name: add short and full hostname to /etc/hosts
-      shell: "sed -i 's/localhost4.localdomain4/localhost4.localdomain4 {{ short_hostname.stdout }} {{ full_hostname.stdout }} {{ short_hostname.stdout }}.{{ search_domain.stdout }}/g' /etc/hosts"
-      sudo: yes
-
-    - name: add images and templates folders
-      shell: >
-        mkdir {{ instack_user_home }}/images;
-        mkdir {{ instack_user_home }}/templates
-      when: hw_env.env_type == 'scale_lab'
-
-    - name: copy instackenv.json to nodes.json
-      shell: cp {{ instack_user_home }}/instackenv.json {{ instack_user_home }}/nodes.json
-
-    - name: installing python-six (workaround)
-      yum: name=python-six state=present
-      sudo: yes
-
 - name: install the undercloud
   hosts: undercloud
   tasks:
-    - name: set selinux to permissive for ospd-8 (workaround bug bz 1284133)
-      selinux: policy=targeted state=permissive
-      sudo: yes
-      when: (workarounds['rhbz1280101']['enabled'] is defined and workarounds['rhbz1280101']['enabled'] | bool)
-
     - name: update hosts file for localhost.localhost (workaround for puppet, discovered on centos7)
       lineinfile: dest=/etc/hosts line="127.0.0.1 localhost localhost.localhost"
       sudo: yes
 
     - name: install the undercloud
       shell: openstack undercloud install --debug &> {{ instack_user_home }}/undercloud_install_initial_install.log
+      ignore_errors: yes
+      register: uc_status
+
+    - name: get overview about what went wrong in undercloud installation
+      shell: |
+        tail -n 200 {{ instack_user_home }}/undercloud_install_initial_install.log
+      ignore_errors: yes
+      when: uc_status.rc != 0
+
+    - name: check if undercloud failed
+      fail: msg="Undercloud install failed"
+      when: uc_status.rc != 0
 
     - name: copy files to home
       sudo: yes
@@ -184,121 +40,15 @@
   tasks:
     - name: install the undercloud
       shell: openstack undercloud install &> {{ instack_user_home }}/undercloud_install_idempotent_check.log
-
-- name: undercloud post install workarounds
-  hosts: undercloud
-  tasks:
-    - name: disable haproxy check (workaround bug bz 1246525)
-      sudo: yes
-      replace: dest=/etc/haproxy/haproxy.cfg regexp='(listen ironic\n.*\n.*)\n.*option httpchk GET \/' replace='\1'
-      when: workarounds.enabled is defined and workarounds.enabled|bool and ha_config_file.stat.exists
-
-    - name: restart haproxy service (workaround bug bz 1246525)
-      command: systemctl restart haproxy
-      sudo: yes
-      when: workarounds.enabled is defined and workarounds.enabled|bool and ha_config_file.stat.exists
-
-    - name: increase stack_action_timeout to 4 hours (workaround for bz 1243365)
-      command: openstack-config --set /etc/heat/heat.conf DEFAULT stack_action_timeout 14400
-      sudo: yes
-      when: workarounds.enabled is defined and workarounds.enabled|bool
-
-    - name: restart openstack-heat-engine (workaround for bz 1243365)
-      command: systemctl restart openstack-heat-engine
-      sudo: yes
-      when: workarounds.enabled is defined and workarounds.enabled|bool
-
-    - name: check if haproxy is present (workaround bug bz 1246525)
-      stat: path=/etc/haproxy/haproxy.cfg
-      register: ha_config_file
-
-    - name: disable haproxy check (workaround bug bz 1246525)
-      sudo: yes
-      replace: dest=/etc/haproxy/haproxy.cfg regexp='(listen ironic\n.*\n.*)\n.*option httpchk GET \/' replace='\1'
-      when: workarounds.enabled is defined and workarounds.enabled|bool and ha_config_file.stat.exists
-
-    - name: restart haproxy service (workaround bug bz 1246525)
-      command: systemctl restart haproxy
-      sudo: yes
-      when: workarounds.enabled is defined and workarounds.enabled|bool and ha_config_file.stat.exists
-
-- name: Execute vendor-specific setup for baremetal environment
-  hosts: undercloud:&baremetal
-  tasks:
-    - name: copy vendor-specific setup file
-      synchronize: >
-        src={{base_dir}}/khaleesi-settings/hardware_environments/{{hw_env.env_type}}/vendor_specific_setup dest={{ instack_user_home }}/vendor_specific_setup
-      delegate_to: local
-      when: hw_env.env_type != 'ovb_host_cloud'
-
-    - name: copy over vendor-specific setup file (quintupleo_host_cloud)
-      local_action: command rsync --delay-updates -F --compress --archive --rsh "ssh -i {{ provisioner.key_file }} -F {{base_dir}}/khaleesi/ssh.config.ansible -S none -o StrictHostKeyChecking=no" {{base_dir}}/khaleesi-settings/hardware_environments/{{hw_env.env_type}}/vendor_specific_setup undercloud:{{ instack_user_home }}/vendor_specific_setup
-      when: hw_env.env_type == 'ovb_host_cloud'
-
-    - name: execute vendor-specific setup
-      shell: >
-        chmod 755 {{ instack_user_home }}/vendor_specific_setup;
-        {{ instack_user_home }}/vendor_specific_setup
-
-- name: Set ironic to control the power state
-  hosts: undercloud:&baremetal
-  tasks:
-    - name: get power state from /etc/ironic/ironic.conf (workaround for bz 1246641)
-      sudo: yes
-      shell: >
-        sudo cat /etc/ironic/ironic.conf | grep 'force_power_state_during_sync=False'
-      when: workarounds.enabled is defined and workarounds.enabled|bool
-
-    - name: allow ironic to control the power state (workaround for bz 1246641)
-      sudo: yes
-      shell: >
-        sed -i 's/force_power_state_during_sync=False/force_power_state_during_sync=True/g' /etc/ironic/ironic.conf
-      when: workarounds.enabled is defined and workarounds.enabled|bool
-
-    - name: get power state from /etc/ironic/ironic.conf (workaround for bz 1246641)
-      sudo: yes
-      shell: >
-        sudo cat /etc/ironic/ironic.conf | grep 'force_power_state_during_sync=True'
-      when: workarounds.enabled is defined and workarounds.enabled|bool
-
-    - name: restart openstack-ironic-conductor (workaround for bz 1246641)
-      sudo: yes
-      shell: >
-        systemctl restart openstack-ironic-conductor
-      when: workarounds.enabled is defined and workarounds.enabled|bool
-
-- name: Execute vendor-specific setup for baremetal environment
-  hosts: undercloud:&baremetal
-  tasks:
-    - name: copy vendor-specific setup file
-      synchronize: >
-        src={{base_dir}}/khaleesi-settings/hardware_environments/{{hw_env.env_type}}/vendor_specific_setup dest={{ instack_user_home }}/vendor_specific_setup
-      delegate_to: local
-      when: hw_env.env_type != 'ovb_host_cloud'
-
-    - name: copy over vendor-specific setup file (quintupleo_host_cloud)
-      local_action: command rsync --delay-updates -F --compress --archive --rsh "ssh -i {{ provisioner.key_file }} -F {{base_dir}}/khaleesi/ssh.config.ansible -S none -o StrictHostKeyChecking=no" {{base_dir}}/khaleesi-settings/hardware_environments/{{hw_env.env_type}}/vendor_specific_setup undercloud:{{ instack_user_home }}/vendor_specific_setup
-      when: hw_env.env_type == 'ovb_host_cloud'
-
-    - name: execute vendor-specific setup
-      shell: >
-        chmod 755 {{ instack_user_home }}/vendor_specific_setup;
-        {{ instack_user_home }}/vendor_specific_setup
-
-- name: setup networking on virt for network isolation
-  hosts: undercloud:&virthost
-  tasks:
-    - name: net-iso virt setup vlans
-      when: installer.network.isolation == 'single_nic_vlans'
-      shell: >
-        source {{ instack_user_home }}/stackrc;
-        sudo ovs-vsctl add-port br-ctlplane vlan10 tag=10 -- set interface vlan10 type=internal;
-        sudo ip l set dev vlan10 up; sudo ip addr add 172.16.23.251/24 dev vlan10;
-
-- name: update neutron values for undercloud
-  hosts: undercloud
-  tasks:
-    - name: update neutron quota to unlimited
-      shell: >
-        source {{ instack_user_home }}/stackrc;
-        neutron quota-update --port -1;
+      ignore_errors: yes
+      register: uc_idemp_status
+
+    - name: get overview about what went wrong in idempotent undercloud installation
+      shell: |
+        tail -n 200 {{ instack_user_home }}/undercloud_install_idempotent_check.log
+      ignore_errors: yes
+      when: uc_idemp_status.rc != 0
+
+    - name: check if idempotent undercloud installation failed
+      fail: msg="Undercloud install failed"
+      when: uc_idemp_status.rc != 0
diff --git a/playbooks/installer/rdo-manager/user/README.txt b/playbooks/installer/rdo-manager/user/README.txt
new file mode 100644
index 000000000..81e18bb2d
--- /dev/null
+++ b/playbooks/installer/rdo-manager/user/README.txt
@@ -0,0 +1,4 @@
+This playbook follows the documentation from tripleo as closely as possible
+
+The user playbooks have been broken out of the environment setup as they are used by multiple environments
+http://docs.openstack.org/developer/tripleo-docs/environments/environments.html
diff --git a/playbooks/installer/rdo-manager/user/main.yml b/playbooks/installer/rdo-manager/user/main.yml
new file mode 100644
index 000000000..4d48e021f
--- /dev/null
+++ b/playbooks/installer/rdo-manager/user/main.yml
@@ -0,0 +1,46 @@
+---
+- name: Create the stack user
+  hosts: "{{ host }}"
+  vars:
+    - ansible_user: root
+  tasks:
+    - name: create user
+      user: name="{{ provisioner.remote_user }}" state=present password=stack
+
+    - name: copy the .bash_profile file
+      command: cp /root/.bash_profile /home/{{ provisioner.remote_user }}/
+
+    - name: set file permissions on .bash_profile
+      file: path=/home/{{ provisioner.remote_user }}/.bash_profile mode=0755 owner={{ provisioner.remote_user }} group={{ provisioner.remote_user }}
+
+    - name: create .ssh dir
+      file: path=/home/{{ provisioner.remote_user }}/.ssh mode=0700 owner={{ provisioner.remote_user }} group=stack state=directory
+
+    - name: copy the authorized_keys file
+      command: cp /root/.ssh/authorized_keys /home/{{ provisioner.remote_user }}/.ssh/
+
+    - name: set file permissions on authorized_hosts
+      file: path=/home/{{ provisioner.remote_user }}/.ssh/authorized_keys mode=0600 owner={{ provisioner.remote_user }} group={{ provisioner.remote_user }}
+
+    - name: add user to sudoers
+      lineinfile: dest=/etc/sudoers line="{{ provisioner.remote_user }} ALL=(root) NOPASSWD:ALL"
+
+    - name: set fact for the stack user home
+      set_fact: instack_user_home=/home/{{ provisioner.remote_user }}
+
+    - name: copy ssh keys
+      command: cp /root/.ssh/id_rsa /home/{{ provisioner.remote_user }}/.ssh/
+      when: hw_env.env_type == 'ovb_host_cloud'
+
+    - name: copy ssh pub keys
+      command: cp /root/.ssh/id_rsa.pub /home/{{ provisioner.remote_user }}/.ssh/
+      when: hw_env.env_type == 'ovb_host_cloud'
+
+    - name: set permission on keys
+      file: path=/home/{{ provisioner.remote_user }}/.ssh/id_rsa mode=0600 owner=stack group=stack
+      when: hw_env.env_type == 'ovb_host_cloud'
+
+    - name: set permission on pub keys
+      file: path=/home/{{ provisioner.remote_user }}/.ssh/id_rsa.pub mode=0644 owner=stack group=stack
+      when: hw_env.env_type == 'ovb_host_cloud'
+
diff --git 
a/playbooks/installer/rdo-manager/yum_repos/README.txt b/playbooks/installer/rdo-manager/yum_repos/README.txt new file mode 100644 index 000000000..22e111cb6 --- /dev/null +++ b/playbooks/installer/rdo-manager/yum_repos/README.txt @@ -0,0 +1,4 @@ +This playbook follows the documentation from tripleo as closely as possible + +The yum repository playbooks have been broken out of the environment setup as they are used by multiple environments +http://docs.openstack.org/developer/tripleo-docs/environments/environments.html diff --git a/playbooks/installer/rdo-manager/yum_repos/repo-rdo.yml b/playbooks/installer/rdo-manager/yum_repos/repo-rdo.yml new file mode 100644 index 000000000..90755836e --- /dev/null +++ b/playbooks/installer/rdo-manager/yum_repos/repo-rdo.yml @@ -0,0 +1,76 @@ +--- +- include: "{{ base_dir }}/khaleesi/playbooks/group_by.yml ansible_user=root" + +- name: RHEL RDO prep + hosts: "{{ repo_host }}:&RedHat" + vars: + - ansible_user: root + roles: + # enable this role when rdo and rhos officially diverge + #- { role: linux/rhel/rdo } + - { role: product/rdo/rhel } + +- name: CentOS RDO prep + hosts: "{{ repo_host }}:&CentOS" + vars: + - ansible_user: root + roles: + - { role: linux/centos } + - { role: product/rdo/rhel } + +- name: Linux common prep (Collect performance data, etc.) 
+ hosts: "{{ repo_host }}" + vars: + - ansible_user: root + roles: + - { role: linux-common } + +- name: Enable EPEL + hosts: "{{ repo_host }}" + vars: + - ansible_user: root + tasks: + - name: Install epel release + command: "yum localinstall -y {{ distro.epel_release }}" + +- name: Add the RDO release repos + hosts: "{{ repo_host }}" + vars: + - ansible_user: root + tasks: + - name: Install rdo-release rpm + yum: + name: "{{ product.rpmrepo[ansible_distribution] }}" + state: present + when: product.repo_type == 'production' + +- name: Update packages on the host + hosts: "{{ repo_host }}" + vars: + - ansible_user: root + tasks: + - name: repolist + command: yum -d 7 repolist + + - name: update all packages + yum: name=* state=latest + when: yum_update | bool + + - name: Find if a new kernel was installed + shell: find /boot/ -anewer /proc/1/stat -name 'initramfs*' | egrep ".*" + register: new_kernel + ignore_errors: True + when: "'{{ repo_host }}' == 'virthost'" + + - name: reboot host + sudo: no + local_action: + wait_for_ssh + reboot_first=true + host="{{ ansible_host }}" + user="root" + ssh_opts="-F {{ base_dir }}/khaleesi/ssh.config.ansible" + key="{{ ansible_ssh_private_key_file }}" + timeout=900 + sudo=false + when: "'{{ repo_host }}' == 'virthost' and new_kernel.rc == 0" diff --git a/playbooks/installer/rdo-manager/undercloud/repo-rhos.yml b/playbooks/installer/rdo-manager/yum_repos/repo-rhos.yml similarity index 74% rename from playbooks/installer/rdo-manager/undercloud/repo-rhos.yml rename to playbooks/installer/rdo-manager/yum_repos/repo-rhos.yml index 93118dd72..35c062fe9 100644 --- a/playbooks/installer/rdo-manager/undercloud/repo-rhos.yml +++ b/playbooks/installer/rdo-manager/yum_repos/repo-rhos.yml @@ -1,25 +1,31 @@ --- -- include: "{{ base_dir }}/khaleesi/playbooks/group_by.yml ansible_ssh_user=root" +- include: "{{ base_dir }}/khaleesi/playbooks/group_by.yml ansible_user=root" - name: Setup openstack repos hosts: "{{ repo_host }}:&RedHat" vars: - 
- ansible_ssh_user: root - - product_override_version: 7 + - ansible_user: root + environment: + http_proxy: "{{ installer.http_proxy_url }}" gather_facts: yes tasks: - - name: clean release rpms - yum: name={{ item }} state=absent - with_items: - - rhos-release + - name: set proxy server for yum configuration + sudo: yes + lineinfile: dest=/etc/yum.conf line="proxy={{ installer.http_proxy_url }}" + when: installer.proxy not in ['none'] + + - name: rpm macro for proxy + sudo: yes + template: src=../templates/rpm.macros.proxy.j2 dest=/etc/rpm/macros.proxy + when: installer.proxy not in ['none'] - - name: remove any yum repos not owned by rpm - shell: rm -Rf /etc/yum.repos.d/{{ item }} - with_items: - - beaker-* + - name: Install release tool on machine + command: "rpm -i {{ product.rpm }}" + when: installer.proxy not in ['none'] - name: Install release tool on machine command: "yum localinstall -y {{ product.rpm }}" + when: installer.proxy in ['none'] #this will uncouple the virthost version from the undercloud and overcloud rhel versions - name: create directory for DIB yum repo configurations @@ -73,27 +79,35 @@ register: pinned_poodle when: product.repo_type in ['poodle'] and product.repo.poodle_pin_version == 'latest' + - name: Execute rhos-release for core rhos poodle (osp) + sudo: yes + shell: > + rhos-release -d -P {{ product.repo.core_product_version }}; + rhos-release -d -r {{ distro.full_version }} -t {{installer.dib_dir}} -P {{ product.repo.core_product_version }}; + register: pinned_poodle + when: product.repo_type in ['poodle'] and product.repo.poodle_pin_version == 'latest' + - name: yum clean all command: yum clean all # - name: Get build details # hosts: "{{ repo_host }}:&RedHat" # vars: -# - ansible_ssh_user: root +# - ansible_user: root # roles: # - build_mark/build - name: Linux common prep (Collect performance data, etc.) 
hosts: "{{ repo_host }}" vars: - - ansible_ssh_user: root + - ansible_user: root roles: - { role: linux-common } - name: Update packages on the host hosts: "{{ repo_host }}" vars: - - ansible_ssh_user: root + - ansible_user: root tasks: - name: repolist command: yum -d 7 repolist @@ -105,17 +119,17 @@ - name: Find if a new kernel was installed shell: find /boot/ -anewer /proc/1/stat -name 'initramfs*' | egrep ".*" register: new_kernel - ignore_errors: True + ignore_errors: true when: "'{{ repo_host }}' == 'virthost'" - name: reboot host sudo: no - local_action: - wait_for_ssh - reboot_first=true - host="{{ ansible_ssh_host }}" - user="root" - key="{{ ansible_ssh_private_key_file }}" - timeout=900 - sudo=false + delegate_to: localhost + wait_for_ssh: + reboot_first: true + host: "{{ ansible_host }}" + user: root + key: "{{ ansible_ssh_private_key_file }}" + timeout: 900 + sudo: false when: "'{{ repo_host }}' == 'virthost' and new_kernel.rc == 0" diff --git a/playbooks/post-deploy/packstack/opendaylight/configure_neutron.yml b/playbooks/post-deploy/packstack/opendaylight/configure_neutron.yml new file mode 100644 index 000000000..d7fd4b0a2 --- /dev/null +++ b/playbooks/post-deploy/packstack/opendaylight/configure_neutron.yml @@ -0,0 +1,38 @@ +- name: Configure neutron to use opendaylight + hosts: controller + sudo: yes + tasks: + - name: set mechanism drivers + ini_file: + dest="/etc/neutron/plugins/ml2/ml2_conf.ini" + section="ml2" + option={{ item.option }} + value={{ item.value }} + with_items: + - { option: 'mechanism_drivers', value: 'opendaylight' } + - { option: 'tenant_network_types', value: 'vxlan' } + + - name: Add opendaylight to ML2 configuration + ini_file: + dest="/etc/neutron/plugins/ml2/ml2_conf.ini" + section="ml2_odl" + option={{ item.option }} + value={{ item.value }} + with_items: + - { option: 'password', value: 'admin' } + - { option: 'username', value: 'admin' } + - { option: 'url', value: 'http://{{ 
hostvars[provisioner.nodes.odl_controller.name].ansible_default_ipv4.address }}:8080/controller/nb/v2/neutron' } + + - name: Configure neutron to use OpenDaylight L3 + shell: > + sed -i "s/router,//g" /etc/neutron/neutron.conf; + sed -i "/^service_plugins/s/$/,networking_odl.l3.l3_odl.OpenDaylightL3RouterPlugin/" /etc/neutron/neutron.conf + + - name: Clean neutron ML2 database + shell: > + export db_connection=`sudo grep ^connection /etc/neutron/neutron.conf`; + export db_name=`echo $db_connection | rev | cut -d/ -f1 | rev | cut -d? -f1`; + sudo mysql -e "drop database if exists $db_name;"; + sudo mysql -e "create database $db_name character set utf8;"; + sudo mysql -e "grant all on $db_name.* to 'neutron'@'%';"; + sudo neutron-db-manage --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head diff --git a/playbooks/post-deploy/packstack/opendaylight/install_odl_driver.yml b/playbooks/post-deploy/packstack/opendaylight/install_odl_driver.yml new file mode 100644 index 000000000..3f6d06f55 --- /dev/null +++ b/playbooks/post-deploy/packstack/opendaylight/install_odl_driver.yml @@ -0,0 +1,17 @@ +--- +- name: Attach ovs to an active opendaylight controller + hosts: controller + sudo: yes + tasks: + - name: Start openvswitch + service: name=openvswitch state=running + + - name: Attach ovs to opendaylight controller + command: ovs-vsctl set-manager tcp:{{ hostvars[provisioner.nodes.odl_controller.name].ansible_default_ipv4.address }}:6640 + +- name: Install opendaylight driver using rpm + hosts: controller + sudo: yes + tasks: + - name: Install opendaylight driver + yum: name=python-networking-odl state=latest diff --git a/playbooks/post-deploy/packstack/opendaylight/install_odl_rpm.yml b/playbooks/post-deploy/packstack/opendaylight/install_odl_rpm.yml new file mode 100644 index 000000000..34f52d62e --- /dev/null +++ b/playbooks/post-deploy/packstack/opendaylight/install_odl_rpm.yml @@ 
-0,0 +1,7 @@ +--- +- name: Install OpenDaylight distribution + hosts: odl_controller + sudo: yes + tasks: + - name: Install OpenDaylight distribution using an RPM + yum: name=opendaylight state=present diff --git a/playbooks/post-deploy/packstack/opendaylight/install_odl_source.yml b/playbooks/post-deploy/packstack/opendaylight/install_odl_source.yml new file mode 100644 index 000000000..9136f90a9 --- /dev/null +++ b/playbooks/post-deploy/packstack/opendaylight/install_odl_source.yml @@ -0,0 +1,83 @@ +--- +- name: Prepare environment for building odl from source + hosts: odl_controller + sudo: yes + tasks: + - name: Create the COPR repos required for component tests + template: src=templates/component-test-copr-repo.j2 dest=/etc/yum.repos.d/component-test-copr.repo + + - name: Install rhpkg repo + command: "yum localinstall -y {{ distro.repo.rhpkg }}" + + - name: Install apache-maven repo + template: src=templates/epel-apache-maven.j2 dest=/etc/yum.repos.d/epel-apache-maven.repo + + - name: Install required RPMs for the build + yum: name="{{ item }}" state=present + with_items: + - mock + - git + - GitPython + - apache-maven + + - name: Install settings + template: src=templates/m2_settings.j2 dest=/usr/share/apache-maven/conf/settings.xml + + - name: Create mock configuration for the build + template: src=templates/mock_config.j2 dest=/etc/mock/rhos-{{ product.full_version }}-odl-rhel-{{ ansible_distribution_version|int }}-build.cfg + + - name: Add entries to hosts file + lineinfile: + dest="/etc/hosts" + insertafter=EOF + line="{{ item }}" + with_items: + - '127.1.0.1 nexus.opendaylight.org' + - '127.1.0.2 repo.maven.apache.org' + - '127.1.0.3 oss.sonatype.org' + - '127.1.0.4 registry.npmjs.org' + + - name: Clone opendayligt dist-git + git: repo='{{ odl.dist_git.url }}' + version='{{ odl.dist_git.branch }}' + dest='/home/{{ ansible_user}}/opendaylight' + accept_hostkey=true + + - name: Clone maven-chain-builder + git: 
repo=https://github.com/bregman-arie/maven-chain-builder.git + dest='/home/{{ ansible_user }}/maven-chain-builder' + accept_hostkey=true + + - name: Install PME + get_url: url={{ odl.pme.url }} dest=/usr/share/apache-maven/lib/ext + +- name: Build opendaylight + hosts: odl_controller + sudo: yes + tasks: + - name: Prepare chain file + args: + chdir: /home/{{ ansible_user}}/maven-chain-builder + shell: > + sudo sed -i "s/\$TAG_TO_BUILD/rhos-{{ product.full_version }}-patches/g" /home/{{ ansible_user }}/opendaylight/make-vars; + /home/{{ ansible_user }}/opendaylight/make-vars; + cp /home/{{ ansible_user}}/opendaylight/opendaylight-chain/opendaylight-chain.ini .; + cd /home/{{ ansible_user }}/opendaylight && git checkout -- make-vars && git checkout -- opendaylight-chain/opendaylight-chain.ini && cd -; + redhat_version=`cat /home/{{ ansible_user }}/opendaylight/*/*.ini | grep "redhat_version = " | cut -d= -f2 | xargs`; + sed -i "s/\%(redhat_version)s/$redhat_version/g" *.ini; + bomver=`cat /home/{{ ansible_user }}/opendaylight/*/*.ini | grep "bomversion = " | cut -d= -f2 | xargs`; + sed -i "s/\%\(bomversion\)s/$bomver/g" *.ini; + sed -i "s/skipTests/skipTests=true/g" *.ini; + sed -i "s/properties = /\n/g" *.ini + + - name: Run apache-chain-builder and build the opendaylight disturbution + args: + chdir: /home/{{ ansible_user}}/maven-chain-builder + shell: "python maven-chain-builder.py opendaylight-chain.ini {{ ansible_user }}" + +- name: Prepare opendaylight distribution for run + hosts: odl_controller + sudo: yes + tasks: + - name: Extract odl distribution to /opt/karaf + shell: "tar -zxf /tmp/org/opendaylight/ovsdb/karaf/*/*.tar.gz -C /opt && mv /opt/karaf* /opt/opendaylight" diff --git a/playbooks/post-deploy/packstack/opendaylight/install_odl_zip.yml b/playbooks/post-deploy/packstack/opendaylight/install_odl_zip.yml new file mode 100644 index 000000000..2b80237df --- /dev/null +++ b/playbooks/post-deploy/packstack/opendaylight/install_odl_zip.yml @@ -0,0 +1,28 @@ 
+--- +- name: Deploy OpenDaylight using zip file + hosts: odl_controller + sudo: yes + tasks: + - name: Download zip file + get_url: + url="{{ opendaylight.distribution.zip }}" + dest=/tmp/karaf.zip + + - name: Ensure unzip installed to extract OpenDaylight distribution + yum: + name=unzip + state=present + + - name: Ensure java installed to run OpenDaylight + yum: + name=java + state=present + + - name: Unzip OpenDaylight distribution + unarchive: + src=/tmp/karaf.zip + dest=/opt + copy=no + + - name: Rename directory to opendaylight + shell: mv /opt/karaf* /opt/opendaylight diff --git a/playbooks/post-deploy/packstack/opendaylight/main.yml b/playbooks/post-deploy/packstack/opendaylight/main.yml new file mode 100644 index 000000000..a955159f7 --- /dev/null +++ b/playbooks/post-deploy/packstack/opendaylight/main.yml @@ -0,0 +1,7 @@ +--- +- include: "install_odl_{{ odl.install.type| default('rpm') }}.yml" +- include: start_odl.yml +- include: stop_services.yml +- include: install_odl_driver.yml +- include: configure_neutron.yml +- include: start_services.yml diff --git a/playbooks/post-deploy/packstack/opendaylight/start_odl.yml b/playbooks/post-deploy/packstack/opendaylight/start_odl.yml new file mode 100644 index 000000000..97d051a52 --- /dev/null +++ b/playbooks/post-deploy/packstack/opendaylight/start_odl.yml @@ -0,0 +1,19 @@ +--- +- name: Start OpenDaylight distribution + hosts: odl_controller + sudo: yes + vars: + odl_controller_name: "{{ provisioner.nodes.odl_controller.name }}" + tasks: + - name: Enable traffic from OpenStack Controller to OpenDaylight node + shell: iptables -I INPUT -j ACCEPT -p tcp -s {{ hostvars[provisioner.nodes.controller.name].ansible_default_ipv4.address }} + + - name: Add L3 configuration + shell: > + echo "ovsdb.l3.fwd.enabled=yes" >> /opt/opendaylight/etc/custom.properties; + eth0_mac_address={{ hostvars[odl_controller_name]['ansible_eth0']['macaddress'] }}; + echo "ovsdb.l3gateway.mac=$eth0_mac_address" >> 
/opt/opendaylight/etc/custom.properties + + - name: Run controller + command: "sh /opt/opendaylight/bin/start" + async: 20 diff --git a/playbooks/post-deploy/packstack/opendaylight/start_services.yml b/playbooks/post-deploy/packstack/opendaylight/start_services.yml new file mode 100644 index 000000000..f56829718 --- /dev/null +++ b/playbooks/post-deploy/packstack/opendaylight/start_services.yml @@ -0,0 +1,40 @@ +--- +- name: Start neutron-server + hosts: controller + sudo: yes + tasks: + - name: Start neutron-server service + service: name=neutron-server + state=running + + # Required for running tests + - name: Create an external network + quantum_network: + state: present + auth_url: "http://{{ hostvars[inventory_hostname].ansible_default_ipv4.address }}:35357/v2.0/" + login_username: admin + login_password: "{{ hostvars[inventory_hostname].admin_password | default('redhat') }}" + login_tenant_name: admin + name: "{{ installer.network.name }}" + provider_network_type: "{{ installer.network.external.provider_network_type }}" + provider_physical_network: "{{ installer.network.label }}" + provider_segmentation_id: "{{ installer.network.external.vlan.tag|default(omit) }}" + router_external: yes + shared: no + admin_state_up: yes + + - name: Create subnet for external network + quantum_subnet: + state: present + auth_url: "http://{{ hostvars[inventory_hostname].ansible_default_ipv4.address }}:35357/v2.0/" + login_username: admin + login_password: "{{ hostvars[inventory_hostname].admin_password | default('redhat') }}" + login_tenant_name: admin + tenant_name: admin + network_name: "{{ installer.network.name }}" + name: external-subnet + enable_dhcp: False + gateway_ip: "{{ provisioner.network.network_list.external.nested.subnet_gateway }}" + cidr: "{{ provisioner.network.network_list.external.nested.subnet_cidr}}" + allocation_pool_start: "{{ provisioner.network.network_list.external.nested.allocation_pool_start }}" + allocation_pool_end: "{{ 
provisioner.network.network_list.external.nested.allocation_pool_end }}" diff --git a/playbooks/post-deploy/packstack/opendaylight/stop_services.yml b/playbooks/post-deploy/packstack/opendaylight/stop_services.yml new file mode 100644 index 000000000..16726ab1c --- /dev/null +++ b/playbooks/post-deploy/packstack/opendaylight/stop_services.yml @@ -0,0 +1,26 @@ +--- +- name: Stop networking services + hosts: controller + sudo: yes + tasks: + - name: Stop neutron-server + service: name=neutron-server + state=stopped + + - name: Stop neutron-openvswitch-agent + service: name=neutron-openvswitch-agent + state=stopped + + - name: Stop openvswitch + service: name=openvswitch + state=stopped + +- name: Remove openvswitch logs and configuration + hosts: controller + sudo: yes + tasks: + - name: Remove openvswitch logs + command: "rm -rf /var/log/openvswitch/*" + + - name: Remove openvswitch configuration + command: "rm -rf /etc/openvswitch/conf.db" \ No newline at end of file diff --git a/roles/linux/rhel/rhos/component-test-copr-repo.j2 b/playbooks/post-deploy/packstack/opendaylight/templates/component-test-copr-repo.j2 similarity index 100% rename from roles/linux/rhel/rhos/component-test-copr-repo.j2 rename to playbooks/post-deploy/packstack/opendaylight/templates/component-test-copr-repo.j2 diff --git a/playbooks/post-deploy/packstack/opendaylight/templates/epel-apache-maven.j2 b/playbooks/post-deploy/packstack/opendaylight/templates/epel-apache-maven.j2 new file mode 100644 index 000000000..5bfa544fb --- /dev/null +++ b/playbooks/post-deploy/packstack/opendaylight/templates/epel-apache-maven.j2 @@ -0,0 +1,13 @@ +[epel-apache-maven] +name=maven from apache foundation. +baseurl=http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-$releasever/$basearch/ +enabled=1 +skip_if_unavailable=1 +gpgcheck=0 + +[epel-apache-maven-source] +name=maven from apache foundation. 
- Source +baseurl=http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-$releasever/SRPMS +enabled=0 +skip_if_unavailable=1 +gpgcheck=0 diff --git a/playbooks/post-deploy/packstack/opendaylight/templates/m2_settings.j2 b/playbooks/post-deploy/packstack/opendaylight/templates/m2_settings.j2 new file mode 100644 index 000000000..a9b28a6b0 --- /dev/null +++ b/playbooks/post-deploy/packstack/opendaylight/templates/m2_settings.j2 @@ -0,0 +1,43 @@ + + + + + opendaylight-release + + + opendaylight-mirror + opendaylight-mirror + {{ private.distro.rhel.download_server }}/brewroot/repos/rhos-{{ product.full_version }}-odl-rhel-{{ ansible_distribution_version|int }}-build/latest/maven/ + + true + never + + + false + + + + + + opendaylight-mirror + opendaylight-mirror + {{ private.distro.rhel.download_server }}/brewroot/repos/rhos-{{ product.full_version }}-odl-rhel-{{ ansible_distribution_version|int }}-build/latest/maven/ + + true + never + + + false + + + + + + + + + opendaylight-release + + diff --git a/playbooks/post-deploy/packstack/opendaylight/templates/mock_config.j2 b/playbooks/post-deploy/packstack/opendaylight/templates/mock_config.j2 new file mode 100644 index 000000000..a2991d5b0 --- /dev/null +++ b/playbooks/post-deploy/packstack/opendaylight/templates/mock_config.j2 @@ -0,0 +1,20 @@ +config_opts['chroothome'] = '/builddir' +config_opts['use_host_resolv'] = True +config_opts['basedir'] = '/var/lib/mock' +config_opts['rpmbuild_timeout'] = 86400 +config_opts['yum.conf'] = '[main]\ncachedir=/var/cache/yum\ndebuglevel=9\nlogfile=/var/log/yum.log\nreposdir=/dev/null\nretries=20\nobsoletes=1\ngpgcheck=0\nassumeyes=1\n\n# repos\n\n[build]\nname=build\nbaseurl={{ private.distro.rhel.download_server }}/brewroot/repos/rhos-{{ product.full_version }}-odl-rhel-{{ ansible_distribution_version|int }}-build/latest/x86_64\n' +config_opts['chroot_setup_cmd'] = 'groupinstall maven-build' +config_opts['target_arch'] = 'x86_64' +config_opts['root'] = 'rhos-{{ 
product.full_version }}-odl-rhel-{{ ansible_distribution_version|int }}-build' + +config_opts['plugin_conf']['root_cache_enable'] = False +config_opts['plugin_conf']['yum_cache_enable'] = False +config_opts['plugin_conf']['ccache_enable'] = False + +config_opts['macros']['%_host'] = 'x86_64-koji-linux-gnu' +config_opts['macros']['%_host_cpu'] = 'x86_64' +config_opts['macros']['%vendor'] = 'Koji' +config_opts['macros']['%distribution'] = 'Koji Testing' +config_opts['macros']['%_topdir'] = '/builddir/build' +config_opts['macros']['%_rpmfilename'] = '%%{NAME}-%%{VERSION}-%%{RELEASE}.%%{ARCH}.rpm' +config_opts['macros']['%packager'] = 'Koji' diff --git a/playbooks/post-deploy/rdo-manager/files/tempest_skip/rdoci-rhos-7-director-rdo-manager b/playbooks/post-deploy/rdo-manager/files/tempest_skip/rdoci-rhos-7-director-rdo-manager index c4dc18db0..fc25f71c2 100644 --- a/playbooks/post-deploy/rdo-manager/files/tempest_skip/rdoci-rhos-7-director-rdo-manager +++ b/playbooks/post-deploy/rdo-manager/files/tempest_skip/rdoci-rhos-7-director-rdo-manager @@ -1,32 +1,4 @@ -# rhbz1253709 --tempest.api.compute.certificates.test_certificates.CertificatesV2TestJSON.test_create_root_certificate --tempest.api.compute.certificates.test_certificates.CertificatesV2TestJSON.test_get_root_certificate -# rhbz1253765 --tempest.api.object_storage.test_container_staticweb.StaticWebTest.test_web_index --tempest.api.object_storage.test_container_staticweb.StaticWebTest --tempest.api.object_storage.test_object_slo.ObjectSloTest.test_delete_large_object --tempest.api.object_storage.test_object_slo.ObjectSloTest.test_list_large_object_metadata --tempest.api.object_storage.test_object_slo.ObjectSloTest.test_retrieve_large_object --tempest.api.object_storage.test_object_slo.ObjectSloTest.test_upload_manifest --tempest.api.object_storage.test_object_version.ContainerTest.test_versioned_container --tempest.api.orchestration.stacks.test_swift_resources.SwiftResourcesTestJSON.test_acl 
--tempest.api.orchestration.stacks.test_swift_resources.SwiftResourcesTestJSON.test_metadata -# rhbz1254938 --tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV1Test.test_volume_backup_create_get_detailed_list_restore_delete --tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV2Test.test_volume_backup_create_get_detailed_list_restore_delete --tempest.api.volume.test_volumes_snapshots.VolumesV1SnapshotTestJSON.test_volume_from_snapshot --tempest.api.volume.test_volumes_snapshots.VolumesV2SnapshotTestJSON.test_volume_from_snapshot # rhbz1266947 -tempest.api.identity.admin.v3 -# rhbz1274308 --tempest.api.object_storage.test_container_services.ContainerTest.test_create_container --tempest.api.object_storage.test_object_services.ObjectTest.test_update_object_metadata -# rhbz1240816 --tempest.scenario.test_volume_boot_pattern -# rhbz1295556 --tempest.api.volume.test_volumes_get -# rhbz1295561 +# rhbz1284845 -tempest.api.image.v2.test_images.BasicOperationsImagesTest.test_update_image -# rhbz1295565 --tempest.api.network.test_ports.PortsTestJSON.test_create_port_in_allowed_allocation_pools --tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_port_in_allowed_allocation_pools diff --git a/playbooks/post-deploy/rdo-manager/files/tempest_skip/rdoci-rhos-8-director-rdo-manager b/playbooks/post-deploy/rdo-manager/files/tempest_skip/rdoci-rhos-8-director-rdo-manager index c4dc18db0..64787dfca 100644 --- a/playbooks/post-deploy/rdo-manager/files/tempest_skip/rdoci-rhos-8-director-rdo-manager +++ b/playbooks/post-deploy/rdo-manager/files/tempest_skip/rdoci-rhos-8-director-rdo-manager @@ -1,32 +1,16 @@ -# rhbz1253709 --tempest.api.compute.certificates.test_certificates.CertificatesV2TestJSON.test_create_root_certificate --tempest.api.compute.certificates.test_certificates.CertificatesV2TestJSON.test_get_root_certificate -# rhbz1253765 --tempest.api.object_storage.test_container_staticweb.StaticWebTest.test_web_index 
--tempest.api.object_storage.test_container_staticweb.StaticWebTest --tempest.api.object_storage.test_object_slo.ObjectSloTest.test_delete_large_object --tempest.api.object_storage.test_object_slo.ObjectSloTest.test_list_large_object_metadata --tempest.api.object_storage.test_object_slo.ObjectSloTest.test_retrieve_large_object --tempest.api.object_storage.test_object_slo.ObjectSloTest.test_upload_manifest --tempest.api.object_storage.test_object_version.ContainerTest.test_versioned_container --tempest.api.orchestration.stacks.test_swift_resources.SwiftResourcesTestJSON.test_acl --tempest.api.orchestration.stacks.test_swift_resources.SwiftResourcesTestJSON.test_metadata -# rhbz1254938 --tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV1Test.test_volume_backup_create_get_detailed_list_restore_delete --tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV2Test.test_volume_backup_create_get_detailed_list_restore_delete --tempest.api.volume.test_volumes_snapshots.VolumesV1SnapshotTestJSON.test_volume_from_snapshot --tempest.api.volume.test_volumes_snapshots.VolumesV2SnapshotTestJSON.test_volume_from_snapshot # rhbz1266947 -tempest.api.identity.admin.v3 -# rhbz1274308 --tempest.api.object_storage.test_container_services.ContainerTest.test_create_container --tempest.api.object_storage.test_object_services.ObjectTest.test_update_object_metadata -# rhbz1240816 --tempest.scenario.test_volume_boot_pattern -# rhbz1295556 --tempest.api.volume.test_volumes_get -# rhbz1295561 +-tempest.api.identity.v3.test_api_discovery +# rhbz1284845 -tempest.api.image.v2.test_images.BasicOperationsImagesTest.test_update_image -# rhbz1295565 --tempest.api.network.test_ports.PortsTestJSON.test_create_port_in_allowed_allocation_pools --tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_port_in_allowed_allocation_pools +# rhbz1304930 +-tempest.api.compute.servers.test_create_server +-tempest.api.compute.servers.test_server_addresses 
+-tempest.api.compute.servers.test_server_actions +-tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesTestJSON +-tempest.scenario.test_network_basic_ops.TestNetworkBasicOps +-tempest.scenario.test_server_basic_ops.TestServerBasicOps +-tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern +-tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2 +# rhbz1304933 +-tempest.api.telemetry.test_telemetry_notification_api diff --git a/playbooks/post-deploy/rdo-manager/overcloud-test.yml b/playbooks/post-deploy/rdo-manager/overcloud-test.yml index 9f0b6c4a7..d94beca2f 100644 --- a/playbooks/post-deploy/rdo-manager/overcloud-test.yml +++ b/playbooks/post-deploy/rdo-manager/overcloud-test.yml @@ -56,7 +56,8 @@ identity.admin_password $OS_PASSWORD \ network.tenant_network_cidr 192.168.0.0/24 \ object-storage.operator_role swiftoperator \ - orchestration.stack_owner_role heat_stack_owner + orchestration.stack_owner_role heat_stack_owner \ + validation.ping_timeout 300 when: installer.tempest.test_regex is defined and installer.tempest.test_regex != "tempest\.scenario\.test_minimum_basic" diff --git a/playbooks/post-deploy/rdo-manager/updates/update-overcloud.yml b/playbooks/post-deploy/rdo-manager/updates/update-overcloud.yml index 27f5f7238..c3d093ec4 100644 --- a/playbooks/post-deploy/rdo-manager/updates/update-overcloud.yml +++ b/playbooks/post-deploy/rdo-manager/updates/update-overcloud.yml @@ -27,7 +27,7 @@ hosts: update:!undercloud tasks: - name: dump package list - shell: rpm -qa &> {{ ansible_ssh_host }}-rpm.log + shell: rpm -qa &> {{ ansible_host }}-rpm.log - name: copy 55-heat-config file to node BZ 1278181 sudo: yes @@ -121,15 +121,15 @@ hosts: update:!undercloud tasks: - name: dump package list - shell: rpm -qa &> {{ ansible_ssh_host }}-rpm-updated.log + shell: rpm -qa &> {{ ansible_host }}-rpm-updated.log - name: get rpm list stat register: rpm_list_result - stat: path=~/{{ ansible_ssh_host }}-rpm.log + stat: path=~/{{ 
ansible_host }}-rpm.log - name: get rpm updated stat register: rpm_list_updated_result - stat: path=~/{{ ansible_ssh_host }}-rpm-updated.log + stat: path=~/{{ ansible_host }}-rpm-updated.log - name: fail when rpm list checksum are equal fail: msg="Failed, no package has been updated..." diff --git a/playbooks/post-deploy/rdo-manager/updates/update-undercloud.yml b/playbooks/post-deploy/rdo-manager/updates/update-undercloud.yml index 1d3799d95..4ce22bee4 100644 --- a/playbooks/post-deploy/rdo-manager/updates/update-undercloud.yml +++ b/playbooks/post-deploy/rdo-manager/updates/update-undercloud.yml @@ -35,15 +35,15 @@ when: not yum_update_result.changed|bool - name: reboot host - local_action: - wait_for_ssh - reboot_first=true - host="{{ ansible_ssh_host }}" - user="stack" - ssh_opts="-F {{ base_dir }}/khaleesi/ssh.config.ansible" - key="{{ ansible_ssh_private_key_file }}" - timeout=900 - sudo=true + delegate_to: localhost + wait_for_ssh: + reboot_first: true + host: "{{ ansible_host }}" + user: stack + ssh_opts: "-F {{ base_dir }}/khaleesi/ssh.config.ansible" + key: "{{ ansible_ssh_private_key_file }}" + timeout: 900 + sudo: true - name: create vlan10 if doesn't exist ignore_errors: yes diff --git a/playbooks/provisioner/beaker/main.yml b/playbooks/provisioner/beaker/main.yml index 75e00e351..f5578d936 100644 --- a/playbooks/provisioner/beaker/main.yml +++ b/playbooks/provisioner/beaker/main.yml @@ -6,20 +6,17 @@ - name: Group by provisioner type group_by: key={{ provisioner.type }} - - name: Group for skipping the provisioning step - group_by: key={{ provisioner.skip }} - - name: Add the host to the inventory add_host: name="host0" groups="provisioned" ansible_fqdn="{{ lookup('env', 'BEAKER_MACHINE') }}" - ansible_ssh_user="{{ provisioner.remote_user }}" + ansible_user="{{ provisioner.remote_user }}" ansible_ssh_private_key_file="{{ provisioner.key_file }}" - ansible_ssh_host="{{ lookup('env', 'BEAKER_MACHINE') }}" + ansible_host="{{ lookup('env', 
'BEAKER_MACHINE') }}" - name: Use beaker to provision the machine - hosts: localhost:!skip_provision + hosts: localhost tasks: - name: Check if beakerCheckOut.sh script exists stat: path="{{base_dir}}/khaleesi-settings/beakerCheckOut.sh" diff --git a/playbooks/provisioner/centosci/main.yml b/playbooks/provisioner/centosci/main.yml index c9a81045c..e9971edce 100644 --- a/playbooks/provisioner/centosci/main.yml +++ b/playbooks/provisioner/centosci/main.yml @@ -29,9 +29,9 @@ if item.item.value.groups is string else item.item.value.groups| join(',') }}" ansible_fqdn="{{ item.hosts.0.hostname }}" - ansible_ssh_user="{{ provisioner.remote_user }}" + ansible_user="{{ provisioner.remote_user }}" ansible_ssh_private_key_file="{{ provisioner.key_file }}" - ansible_ssh_host="{{ item.hosts.0.hostname }}" + ansible_host="{{ item.hosts.0.hostname }}" with_items: provisioned_nodes.results - name: wait for hosts to get reachable @@ -39,6 +39,9 @@ gather_facts: no max_fail_percentage: 0 tasks: - - local_action: - module: wait_for_ssh host={{ hostvars[inventory_hostname].ansible_ssh_host }} user=root key={{ hostvars[inventory_hostname].ansible_ssh_private_key_file }} - sudo: no + delegate_to: localhost + wait_for_ssh: + host: "{{ hostvars[inventory_hostname].ansible_host }}" + user: root + key: "{{ hostvars[inventory_hostname].ansible_ssh_private_key_file }}" + sudo: no diff --git a/playbooks/provisioner/foreman/main.yml b/playbooks/provisioner/foreman/main.yml index 5d9c87f96..6662d0425 100644 --- a/playbooks/provisioner/foreman/main.yml +++ b/playbooks/provisioner/foreman/main.yml @@ -5,16 +5,14 @@ tasks: - name: Add candidate hosts to host list add_host: - name="{{ item.value.name }}" - groups="{{ item.value.groups - if item.value.groups is string - else item.value.groups| join(',') }}" - rebuild="{{ item.value.rebuild|lower}}" - node_label="{{ item.key }}" - ansible_fqdn="{{ item.value.fqdn }}" - ansible_ssh_user="{{ item.value.remote_user }}" - ansible_ssh_host="{{ 
item.value.fqdn }}" - ansible_ssh_private_key_file="{{ provisioner.key_file }}" + name: "{{ item.value.name }}" + groups: "{{ item.value.groups if item.value.groups is string else item.value.groups| join(',') }}" + rebuild: "{{ item.value.rebuild|lower}}" + node_label: "{{ item.key }}" + ansible_fqdn: "{{ item.value.fqdn }}" + ansible_user: "{{ item.value.remote_user }}" + ansible_host: "{{ item.value.fqdn }}" + ansible_ssh_private_key_file: "{{ provisioner.key_file }}" with_dict: provisioner.nodes - name: Rebuild nodes - Foreman @@ -26,7 +24,7 @@ auth_url: "{{ provisioner.foreman.auth_url }}" username: "{{ provisioner.foreman.username }}" password: "{{ provisioner.foreman.password }}" - host_id: "{{ ansible_ssh_host }}" + host_id: "{{ hostvars[inventory_hostname].ansible_host }}" rebuild: "{{ rebuild }}" wait_for_host: "{{ provisioner.foreman.wait_for_host|lower }}" retries: 4 @@ -34,41 +32,41 @@ register: created_nodes - name: Wait for hosts to get reachable (after rebuild) - local_action: - wait_for_ssh - user="root" - host={{ hostvars[inventory_hostname].ansible_ssh_host }} - key={{ hostvars[inventory_hostname].ansible_ssh_private_key_file }} + delegate_to: localhost + wait_for_ssh: + user: "root" + host: "{{ hostvars[inventory_hostname].ansible_host }}" + key: "{{ hostvars[inventory_hostname].ansible_ssh_private_key_file }}" - name: Check and Enable virtualization support hosts: openstack_nodes:virthost gather_facts: no vars: - - ansible_ssh_user: root + - ansible_user: root tasks: - name: Check if CPU supports INTEL based KVM shell: egrep -c 'vmx' /proc/cpuinfo - ignore_errors: True + ignore_errors: true register: kvm_intel - name: Check if CPU supports AMD based KVM shell: egrep -c 'svm' /proc/cpuinfo - ignore_errors: True + ignore_errors: true register: kvm_amd - name: Enable KVM modules modprobe: name=kvm - ignore_errors: True + ignore_errors: true when: kvm_intel.rc == 0 or kvm_amd.rc == 0 - name: Enable Intel KVM module modprobe: name=kvm_intel - 
ignore_errors: True + ignore_errors: true when: kvm_intel.rc == 0 - name: Enable AMD KVM module modprobe: name=kvm_amd - ignore_errors: True + ignore_errors: true when: kvm_amd.rc == 0 - name: Install required QEMU-KVM packages @@ -91,8 +89,13 @@ provisioner.nodes[node_label].network.interfaces register: update_ifcfgs - - local_action: - module: wait_for_ssh reboot_first=true host={{ hostvars[inventory_hostname].ansible_ssh_host }} user={{ hostvars[inventory_hostname].ansible_ssh_user }} key={{ hostvars[inventory_hostname].ansible_ssh_private_key_file }} + - name: reboot and wait for ssh + delegate_to: localhost + wait_for_ssh: + reboot_first: true + host: "{{ hostvars[inventory_hostname].ansible_host }}" + user: "{{ hostvars[inventory_hostname].ansible_user }}" + key: "{{ hostvars[inventory_hostname].ansible_ssh_private_key_file }}" when: update_ifcfgs|changed sudo: no diff --git a/playbooks/provisioner/manual/main.yml b/playbooks/provisioner/manual/main.yml index 9b974fd8e..796ef33e1 100644 --- a/playbooks/provisioner/manual/main.yml +++ b/playbooks/provisioner/manual/main.yml @@ -10,14 +10,14 @@ with_dict: provisioner.nodes - name: Add the host to the inventory - when: installer.type in ['rdo-manager'] + when: installer.type in ['rdo-manager', 'project'] add_host: name="{{ item.value.name }}" groups="{{ item.value.groups if item.value.groups is string else item.value.groups| join(',') }}" ansible_fqdn="{{ item.value.hostname }}" - ansible_ssh_user="{{ item.value.remote_user }}" + ansible_user="{{ item.value.remote_user }}" ansible_ssh_private_key_file="{{ provisioner.key_file }}" - ansible_ssh_host="{{ item.value.hostname }}" + ansible_host="{{ item.value.hostname }}" with_dict: provisioner.nodes diff --git a/playbooks/provisioner/openstack/cleanup.yml b/playbooks/provisioner/openstack/cleanup.yml index 3e0e0d527..4fa2ecdfd 100644 --- a/playbooks/provisioner/openstack/cleanup.yml +++ b/playbooks/provisioner/openstack/cleanup.yml @@ -7,8 +7,16 @@ - group_by: 
key=net_prov when: provisioner.network.dynamic_net is defined and provisioner.network.dynamic_net +- name: Check the nodes which need a floating IP from a specific network + hosts: localhost + gather_facts: no + sudo: no + tasks: + - group_by: key=net_add_floatingip + when: provisioner.network.public_net_name is defined + - name: Cleanup Networks - hosts: net_prov + hosts: net_add_floatingip gather_facts: no tasks: - name: Delete Floating IPs @@ -28,61 +36,39 @@ gather_facts: no tasks: - name: Delete created nodes - nova_compute: - auth_url: "{{ provisioner.auth_url }}" - state: absent - login_username: "{{ provisioner.username }}" - login_password: "{{ provisioner.password }}" - login_tenant_name: "{{ provisioner.tenant_name }}" - name: "{{ item.value.name }}" + os_server: + state: absent + auth: + auth_url: "{{ provisioner.auth_url }}" + username: "{{ provisioner.username }}" + password: "{{ provisioner.password }}" + project_name: "{{ provisioner.tenant_name }}" + name: "{{ item.value.name }}" # wait for deletion until we can delete floating ips explicitly. 
- wait: "yes" + wait: yes with_dict: provisioner.nodes - name: Cleanup Networks hosts: net_prov gather_facts: no tasks: - - name: Detach network interfaces from the router - quantum_router_interface: - auth_url: "{{ provisioner.auth_url }}" - login_username: "{{ provisioner.username }}" - login_password: "{{ provisioner.password }}" - login_tenant_name: "{{ provisioner.tenant_name }}" - state: absent - router_name: "{{ provisioner.network.router.name }}" - subnet_name: "{{ item }}" - with_items: - - "{{ provisioner['network']['network_list']['management']['subnet_name'] }}" - - "{{ provisioner['network']['network_list']['external']['subnet_name'] }}" - - - name: Unset gateway for router - quantum_router_gateway: - auth_url: "{{ provisioner.auth_url }}" - login_username: "{{ provisioner.username }}" - login_password: "{{ provisioner.password }}" - login_tenant_name: "{{ provisioner.tenant_name }}" - router_name: "{{ provisioner.network.router.name }}" - state: absent - - name: Delete created router - quantum_router: - auth_url: "{{ provisioner.auth_url }}" - state: absent - login_username: "{{ provisioner.username }}" - login_password: "{{ provisioner.password }}" - login_tenant_name: "{{ provisioner.tenant_name }}" - name: "{{ provisioner.network.router.name }}" + os_router: + state: absent + auth: + auth_url: "{{ provisioner.auth_url }}" + username: "{{ provisioner.username }}" + password: "{{ provisioner.password }}" + project_name: "{{ provisioner.tenant_name }}" + name: "{{ provisioner.network.router.name }}" - name: Delete created networks - quantum_network: - auth_url: "{{ provisioner.auth_url }}" - state: absent - login_username: "{{ provisioner.username }}" - login_password: "{{ provisioner.password }}" - login_tenant_name: "{{ provisioner.tenant_name }}" - name: "{{ item }}" - with_items: - - "{{ provisioner.network.network_list.management.name }}" - - "{{ provisioner.network.network_list.data.name }}" - - "{{ provisioner.network.network_list.external.name 
}}" + os_network: + state: absent + auth: + auth_url: "{{ provisioner.auth_url }}" + username: "{{ provisioner.username }}" + password: "{{ provisioner.password }}" + project_name: "{{ provisioner.tenant_name }}" + name: "{{ item }}" + with_items: provisioner.network.network_list.values()|map(attribute='name')|list diff --git a/playbooks/provisioner/openstack/main.yml b/playbooks/provisioner/openstack/main.yml index db370f15f..97ad5b3a2 100644 --- a/playbooks/provisioner/openstack/main.yml +++ b/playbooks/provisioner/openstack/main.yml @@ -7,75 +7,70 @@ - group_by: key=net_prov when: provisioner.network.dynamic_net is defined and provisioner.network.dynamic_net +- name: Check the nodes which need a floating IP from a specific network + hosts: localhost + gather_facts: no + sudo: no + tasks: + - group_by: key=net_add_floatingip + when: provisioner.network.public_net_name is defined - name: Create networks hosts: net_prov gather_facts: no tasks: - name: Create networks - quantum_network: - auth_url: "{{ provisioner.auth_url }}" - login_username: "{{ provisioner.username }}" - login_password: "{{ provisioner.password }}" - login_tenant_name: "{{ provisioner.tenant_name }}" - name: "{{ item.value.name }}" + os_network: + state: present + auth: + auth_url: "{{ provisioner.auth_url }}" + username: "{{ provisioner.username }}" + password: "{{ provisioner.password }}" + project_name: "{{ provisioner.tenant_name }}" + name: "{{ item }}" register: "networks" - with_dict: "{{ provisioner.network.network_list }}" + with_items: "{{ provisioner.network.network_list.values()|map(attribute='name')|list }}" - name: Create subnets hosts: net_prov gather_facts: no tasks: - name: Create subnet for each network - quantum_subnet: - auth_url: "{{ provisioner.auth_url }}" - login_username: "{{ provisioner.username }}" - login_password: "{{ provisioner.password }}" - login_tenant_name: "{{ provisioner.tenant_name }}" - name: "{{ item.value.subnet_name }}" - cidr: "{{ item.value.cidr }}" - 
network_name: "{{ item.value.name }}" - enable_dhcp: "{{ item.value.enable_dhcp | default('True') }}" -# dns_nameservers: "{{ item.value.dns_nameservers | join(',') | default('null') }}" - dns_nameservers: "{{ item.value.dns_nameservers.first_dns | default(omit) }}" - allocation_pool_start: "{{ item.value.allocation_pool_start | default(omit) }}" - allocation_pool_end: "{{ item.value.allocation_pool_end | default(omit) }}" + os_subnet: + auth: + auth_url: "{{ provisioner.auth_url }}" + username: "{{ provisioner.username }}" + password: "{{ provisioner.password }}" + project_name: "{{ provisioner.tenant_name }}" + name: "{{ item.subnet_name }}" + cidr: "{{ item.cidr }}" + network_name: "{{ item.name }}" + enable_dhcp: "{{ item.enable_dhcp | default('True') }}" +# dns_nameservers: "{{ item.dns_nameservers | join(',') | default('null') }}" + dns_nameservers: "{{ item.dns_nameservers.values() | default(omit) }}" + allocation_pool_start: "{{ item.allocation_pool_start | default(omit) }}" + allocation_pool_end: "{{ item.allocation_pool_end | default(omit) }}" register: "subnets" - with_dict: "{{ provisioner.network.network_list }}" + with_items: provisioner.network.network_list.values() + - name: Create and configure router hosts: net_prov tasks: - name: Create router - quantum_router: - auth_url: "{{ provisioner.auth_url }}" - login_username: "{{ provisioner.username }}" - login_password: "{{ provisioner.password }}" - login_tenant_name: "{{ provisioner.tenant_name }}" + os_router: + auth: + auth_url: "{{ provisioner.auth_url }}" + username: "{{ provisioner.username }}" + password: "{{ provisioner.password }}" + project_name: "{{ provisioner.tenant_name }}" name: "{{ provisioner.network.router.name }}" + network: "{{ provisioner.network.public_net_name }}" + interfaces: + - "{{ provisioner['network']['network_list']['external']['subnet_name'] }}" + - "{{ provisioner['network']['network_list']['management']['subnet_name'] }}" register: router - - name: Attach external 
interface to the router - quantum_router_interface: - auth_url: "{{ provisioner.auth_url }}" - login_username: "{{ provisioner.username }}" - login_password: "{{ provisioner.password }}" - login_tenant_name: "{{ provisioner.tenant_name }}" - router_name: "{{ provisioner.network.router.name }}" - subnet_name: "{{ item }}" - with_items: - - "{{ provisioner['network']['network_list']['external']['subnet_name'] }}" - - "{{ provisioner['network']['network_list']['management']['subnet_name'] }}" - - - name: Set gateway for router - quantum_router_gateway: - auth_url: "{{ provisioner.auth_url }}" - login_username: "{{ provisioner.username }}" - login_password: "{{ provisioner.password }}" - login_tenant_name: "{{ provisioner.tenant_name }}" - network_name: "{{ provisioner.network.public_net_name }}" - router_name: "{{ provisioner.network.router.name }}" - - name: Create nodes - OpenStack hosts: localhost gather_facts: no @@ -87,25 +82,25 @@ results: "{{ provisioner.network.network_list.values() }}" - name: Create nodes - nova_compute: - auth_url: "{{ provisioner.auth_url }}" - state: present - login_username: "{{ provisioner.username }}" - login_password: "{{ provisioner.password }}" - login_tenant_name: "{{ provisioner.tenant_name }}" - name: "{{ item.value.name }}" - image_id: "{{ item.value.image_id }}" - key_name: "{{ provisioner.key_name }}" - flavor_id: "{{ item.value.flavor_id }}" - nics: - - net-id: "{{ networks.results.0.id }}" - - net-id: "{{ networks.results.1.id }}" - - net-id: "{{ networks.results.2.id }}" - config_drive: True - auto_floating_ip: "{{ provisioner.network.use_floating_ip | default(omit) }}" - wait_for: 800 - # our library/nova_compute will retry booting new servers - # in case of errors, until it reaches 'wait_for' seconds timelimit + os_server: + state: present + auth: + auth_url: "{{ provisioner.auth_url }}" + username: "{{ provisioner.username }}" + password: "{{ provisioner.password }}" + project_name: "{{ provisioner.tenant_name }}" + 
name: "{{ item.value.name }}" + image: "{{ item.value.image_id }}" + key_name: "{{ provisioner.key_name }}" + flavor: "{{ item.value.flavor_id }}" + nics: + - net-id: "{{ networks.results.0.id }}" + - net-id: "{{ networks.results.1.id }}" + - net-id: "{{ networks.results.2.id }}" + config_drive: True + auto_floating_ip: "{{ provisioner.network.use_floating_ip | default(false) }}" + timeout: 180 + wait: yes with_dict: provisioner.nodes register: created_nodes @@ -114,14 +109,14 @@ name: "{{ item.item.value.name }}" groups: "{{ item.item.value.groups if item.item.value.groups is string else item.item.value.groups| join(',') }}" ansible_fqdn: "{{ item.item.value.hostname }}" - ansible_ssh_user: "{{ item.item.value.remote_user }}" + ansible_user: "{{ item.item.value.remote_user }}" ansible_ssh_private_key_file: "{{ provisioner.key_file }}" - ansible_ssh_host: "{%- if item.public_ip %}{{ item.public_ip }}{%- else %}{{ item.info.addresses[provisioner.network.network_list.management.name][0].addr }}{% endif %}" - eth1_interface_ip: "{{ item.info.addresses[provisioner.network.network_list.data.name][0].addr }}" + ansible_host: "{%- if item.interface_ip is defined %}{{ item.interface_ip }}{%- else %}{{ item.openstack.addresses[provisioner.network.network_list.management.name][0].addr }}{% endif %}" + eth1_interface_ip: "{{ item.openstack.addresses[provisioner.network.network_list.data.name][0].addr }}" with_items: created_nodes.results - name: Add Floating IPs - hosts: net_prov + hosts: net_add_floatingip tasks: - name: assign floating ip to instances quantum_floating_ip: @@ -138,7 +133,7 @@ - name: Add Neutron Floating IPs to host list add_host: name: "{{ item.item.value.name }}" - ansible_ssh_host: "{{ item.public_ip }}" + ansible_host: "{{ item.public_ip }}" with_items: floatingip.results when: floatingip @@ -146,16 +141,23 @@ hosts: openstack_nodes gather_facts: no max_fail_percentage: 0 + sudo: no tasks: - name: Wait for Reachable Nodes - wait_for_ssh: - host: "{{ 
hostvars[inventory_hostname].ansible_ssh_host }}" - user: "{{ hostvars[inventory_hostname].ansible_ssh_user }}" - key: "{{ hostvars[inventory_hostname].ansible_ssh_private_key_file }}" - timeout: "{{ provisioner.ssh_timeout | default(omit) }}" - sudo: no + wait_for: + host: "{{ ansible_host }}" + port: 22 + search_regex: OpenSSH + timeout: 600 delegate_to: localhost +- name: Ensure hostname is configured properly + hosts: openstack_nodes + gather_facts: yes + sudo: yes + roles: + - system/set_hostname + - name: Update network interfaces on nodes - OpenStack hosts: openstack_nodes gather_facts: yes @@ -184,7 +186,13 @@ line: "IPADDR={{ hostvars[inventory_hostname].eth1_interface_ip }}" register: update_ifcfg1 - - local_action: - module: wait_for_ssh reboot_first=true host={{ hostvars[inventory_hostname].ansible_ssh_host }} user={{ hostvars[inventory_hostname].ansible_ssh_user }} key={{ hostvars[inventory_hostname].ansible_ssh_private_key_file }} + - name: reboot and wait for ssh when: update_ifcfgs|changed or update_ifcfg1|changed + delegate_to: localhost sudo: no + wait_for_ssh: + reboot_first: "true" + # delegate_to changes the context for ansible_vars + host: "{{ hostvars[inventory_hostname].ansible_host }}" + user: "{{ hostvars[inventory_hostname].ansible_user }}" + key: "{{ hostvars[inventory_hostname].ansible_ssh_private_key_file }}" diff --git a/playbooks/provisioner/openstack_virtual_baremetal/main.yml b/playbooks/provisioner/openstack_virtual_baremetal/main.yml index ce7967c09..6154424e4 100644 --- a/playbooks/provisioner/openstack_virtual_baremetal/main.yml +++ b/playbooks/provisioner/openstack_virtual_baremetal/main.yml @@ -14,9 +14,9 @@ name="host0" groups="provisioned" ansible_fqdn="{{ lookup('env', 'TEST_MACHINE') }}" - ansible_ssh_user="{{ provisioner.remote_user }}" + ansible_user="{{ provisioner.remote_user }}" ansible_ssh_private_key_file="{{ provisioner.key_file }}" - ansible_ssh_host="{{ lookup('env', 'TEST_MACHINE') }}" + ansible_host="{{ 
lookup('env', 'TEST_MACHINE') }}" - name: set up host cloud environment hosts: host0 @@ -406,7 +406,7 @@ fetch: src=~/ssh.config.ansible dest={{ base_dir }}/khaleesi/ssh.config.ansible flat=yes - name: change mod for ssh.config.ansible - local_action: shell chmod 755 {{ base_dir }}/khaleesi/ssh.config.ansible + shell: chmod 755 {{ base_dir }}/khaleesi/ssh.config.ansible - name: copy id_rsa key back to the slave fetch: src=~/.ssh/id_rsa dest={{ base_dir }}/khaleesi/id_rsa_undercloud_instance flat=yes diff --git a/playbooks/provisioner/templates/hosts.j2 b/playbooks/provisioner/templates/hosts.j2 index ba9dca649..67ec626f6 100644 --- a/playbooks/provisioner/templates/hosts.j2 +++ b/playbooks/provisioner/templates/hosts.j2 @@ -1,5 +1,5 @@ {% for host in groups['all'] %} {% if hostvars[host].get('ansible_connection', '') != 'local' %} -{{ hostvars[host]['ansible_ssh_host'] }} {{ host }} {{ host }}{{ provisioner.network.domain }} +{{ hostvars[host]['ansible_host'] }} {{ host }} {{ host }}{{ provisioner.network.domain }} {% endif %} {% endfor %} diff --git a/playbooks/provisioner/templates/inventory.j2 b/playbooks/provisioner/templates/inventory.j2 index 014bd2daa..c7bee727c 100644 --- a/playbooks/provisioner/templates/inventory.j2 +++ b/playbooks/provisioner/templates/inventory.j2 @@ -2,9 +2,9 @@ {% if hostvars[host].get('ansible_connection', '') == 'local' %} {{ host }} ansible_connection=local {% elif hostvars[host]['ansible_ssh_private_key_file'] is defined %} -{{ host }} ansible_ssh_host={{ hostvars[host]['ansible_ssh_host'] }} ansible_ssh_user={{ hostvars[host]['ansible_ssh_user'] }} ansible_ssh_private_key_file={{ hostvars[host]['ansible_ssh_private_key_file'] }} +{{ host }} ansible_host={{ hostvars[host]['ansible_host'] }} ansible_user={{ hostvars[host]['ansible_user'] }} ansible_ssh_private_key_file={{ hostvars[host]['ansible_ssh_private_key_file'] }} {% else %} -{{ host }} ansible_ssh_host={{ hostvars[host]['ansible_ssh_host'] }} ansible_ssh_user={{ 
hostvars[host]['ansible_ssh_user'] }} ansible_ssh_password={{ hostvars[host]['ansible_ssh_password'] }} +{{ host }} ansible_host={{ hostvars[host]['ansible_host'] }} ansible_user={{ hostvars[host]['ansible_user'] }} ansible_ssh_password={{ hostvars[host]['ansible_ssh_password'] }} {% endif %} {% endfor %} diff --git a/playbooks/provisioner/virsh/cleanup.yml b/playbooks/provisioner/virsh/cleanup.yml index 67b8a4182..986d3bca2 100644 --- a/playbooks/provisioner/virsh/cleanup.yml +++ b/playbooks/provisioner/virsh/cleanup.yml @@ -8,8 +8,8 @@ name="{{ item.value.name }}" groups="{{ item.value.groups| join(',') }}" node_label="{{ item.key }}" - ansible_ssh_user="{{ item.value.ssh_user }}" - ansible_ssh_host="{{ item.value.ssh_host }}" + ansible_user="{{ item.value.ssh_user }}" + ansible_host="{{ item.value.ssh_host }}" ansible_ssh_private_key_file="{{ item.value.ssh_key_file }}" with_dict: provisioner.hosts diff --git a/playbooks/provisioner/virsh/main.yml b/playbooks/provisioner/virsh/main.yml index bfbe9cb35..f6cc2dfe2 100644 --- a/playbooks/provisioner/virsh/main.yml +++ b/playbooks/provisioner/virsh/main.yml @@ -8,8 +8,8 @@ name="{{ item.value.name }}" groups="{{ item.value.groups| join(',') }}" node_label="{{ item.key }}" - ansible_ssh_user="{{ item.value.ssh_user }}" - ansible_ssh_host="{{ item.value.ssh_host }}" + ansible_user="{{ item.value.ssh_user }}" + ansible_host="{{ item.value.ssh_host }}" ansible_ssh_private_key_file="{{ item.value.ssh_key_file }}" with_dict: provisioner.hosts @@ -33,11 +33,13 @@ - name: Check if virtualization is supported hosts: virthost gather_facts: no + vars: + - ansible_user: root sudo: yes tasks: - name: check if CPU supports INTEL based KVM shell: egrep -c 'vmx' /proc/cpuinfo - ignore_errors: True + ignore_errors: true register: kvm_intel - name: set fact for Intel based KVM @@ -47,7 +49,7 @@ - name: check if CPU supports AMD based KVM shell: egrep -c 'svm' /proc/cpuinfo - ignore_errors: True + ignore_errors: true register: kvm_amd 
- name: set fact for AMD based KVM @@ -58,6 +60,8 @@ - name: Enable KVM for intel hosts: virthost gather_facts: no + vars: + - ansible_user: root sudo: yes tasks: - name: enable nested KVM support for Intel @@ -81,14 +85,14 @@ modprobe: name: "kvm_{{ kvm_base }}" state: absent - ignore_errors: True + ignore_errors: true when: kvm_base is defined - name: load KVM module modprobe: name: "kvm_{{ kvm_base }}" state: present - ignore_errors: True + ignore_errors: true when: kvm_base is defined - name: install required QEMU-KVM packages @@ -100,14 +104,14 @@ modprobe: name: "vhost-net" state: absent - ignore_errors: True + ignore_errors: true when: kvm_base is defined - name: load KVM module modprobe: name: "vhost-net" state: present - ignore_errors: True + ignore_errors: true - name: Validate virtualization supported on host hosts: virthost @@ -238,9 +242,9 @@ add_host: name="{{ item.item.item[0] }}" groups="{{ provisioner.nodes['%s' % item.item.item[0].rstrip('1234567890')].groups | join(',') }}" - ansible_ssh_user="root" + ansible_user="root" ansible_ssh_password="redhat" - ansible_ssh_host="{{ item.stdout }}" + ansible_host="{{ item.stdout }}" when: item.item is defined and item.item.item[1] == "management" with_items: vm_ip_list @@ -290,7 +294,7 @@ - name: update the ssh host name of each machine add_host: name="{{ item }}" - ansible_ssh_host="{{ item }}" + ansible_host="{{ item }}" with_items: groups['openstack_nodes'] - name: update ansible with the new SSH settings diff --git a/playbooks/provisioner/virsh/templates/ssh.config.ansible.j2 b/playbooks/provisioner/virsh/templates/ssh.config.ansible.j2 index b33ff06b5..590692968 100644 --- a/playbooks/provisioner/virsh/templates/ssh.config.ansible.j2 +++ b/playbooks/provisioner/virsh/templates/ssh.config.ansible.j2 @@ -2,7 +2,7 @@ {% if hostvars[host].get('ansible_connection', '') != 'local' and host != 'virthost' %} Host {{ host }} ProxyCommand ssh -i {{ provisioner.hosts.host1.ssh_key_file }} {{ 
provisioner.hosts.host1.ssh_user }}@{{ provisioner.hosts.host1.ssh_host }} nc %h %p - HostName {{ hostvars[host].ansible_ssh_host }} + HostName {{ hostvars[host].ansible_host }} User root IdentityFile {{ inventory_dir }}/id_rsa StrictHostKeyChecking no diff --git a/playbooks/tester/api/pre.yml b/playbooks/tester/api/pre.yml index 30df8573a..171f73d87 100644 --- a/playbooks/tester/api/pre.yml +++ b/playbooks/tester/api/pre.yml @@ -1,4 +1,11 @@ --- +- name: Set path for tests + hosts: controller + gather_facts: yes + tasks: + - name: Set tests path + set_fact: tests_path="{{ tester.component.dir }}" + - name: Run pre tasks hosts: controller gather_facts: yes diff --git a/playbooks/tester/coverage/activate.yml b/playbooks/tester/coverage/activate.yml index a13915041..65deedff9 100644 --- a/playbooks/tester/coverage/activate.yml +++ b/playbooks/tester/coverage/activate.yml @@ -10,9 +10,9 @@ - file: path=/tmp/coverage-data state=touch mode="u=rwx,g=rwx,o=rwx" - template: - owner: "{{ hostvars[inventory_hostname].ansible_ssh_user }}" - group: "{{ hostvars[inventory_hostname].ansible_ssh_user }}" - dest: "/home/{{ hostvars[inventory_hostname].ansible_ssh_user }}/.coveragerc" + owner: "{{ hostvars[inventory_hostname].ansible_user }}" + group: "{{ hostvars[inventory_hostname].ansible_user }}" + dest: "/home/{{ hostvars[inventory_hostname].ansible_user }}/.coveragerc" src: ./templates/my.coveragerc.j2 - template: diff --git a/playbooks/tester/coverage/generate-report.yml b/playbooks/tester/coverage/generate-report.yml index b7c3ced21..f25e8f143 100644 --- a/playbooks/tester/coverage/generate-report.yml +++ b/playbooks/tester/coverage/generate-report.yml @@ -11,7 +11,7 @@ - tar - name: generate coverage report - shell: "coverage html --rcfile=/home/{{ hostvars[inventory_hostname].ansible_ssh_user }}/.coveragerc" + shell: "coverage html --rcfile=/home/{{ hostvars[inventory_hostname].ansible_user }}/.coveragerc" - name: pack coverage report shell: tar czf 
/tmp/coverage_html_report.tar.gzip /tmp/coverage_html_report diff --git a/playbooks/tester/coverage/templates/sitecustomize.py.j2 b/playbooks/tester/coverage/templates/sitecustomize.py.j2 index 942742416..e387192ff 100644 --- a/playbooks/tester/coverage/templates/sitecustomize.py.j2 +++ b/playbooks/tester/coverage/templates/sitecustomize.py.j2 @@ -1,6 +1,6 @@ import os import coverage -os.environ['COVERAGE_PROCESS_START']= "/home/{{ hostvars[inventory_hostname].ansible_ssh_user}}/.coveragerc" +os.environ['COVERAGE_PROCESS_START']= "/home/{{ hostvars[inventory_hostname].ansible_user}}/.coveragerc" os.environ['COVERAGE_FILE'] = "/tmp/coverage-data" coverage.process_startup() diff --git a/playbooks/tester/integration/common/upload_image.yml b/playbooks/tester/integration/common/upload_image.yml index bc6a353a7..21e8cd55d 100644 --- a/playbooks/tester/integration/common/upload_image.yml +++ b/playbooks/tester/integration/common/upload_image.yml @@ -4,9 +4,9 @@ gather_facts: no sudo: no vars: - - demo_username: demo - - demo_password: "{{ hostvars[controller_name].demo_password | default('redhat') }}" - - demo_tenant_name: demo + - demo_username: "{{ tester.accounts[0].username | default('demo') }}" + - demo_password: "{{ tester.accounts[0].password | default('redhat') }}" + - demo_tenant_name: "{{ tester.accounts[0].tenant_name | default('demo') }}" - controller_name: "{{ provisioner.nodes.controller.name }}" - controller_ip: "{{ hostvars[controller_name].ansible_default_ipv4.address }}" tasks: diff --git a/playbooks/tester/integration/horizon/pre.yml b/playbooks/tester/integration/horizon/pre.yml index d235c1967..9b5cae8bf 100644 --- a/playbooks/tester/integration/horizon/pre.yml +++ b/playbooks/tester/integration/horizon/pre.yml @@ -1,14 +1,4 @@ --- -- name: Prepare the environment (users and tenant) - hosts: controller - sudo: no - gather_facts: yes - vars: - controller_auth_url: "http://{{ ansible_default_ipv4.address }}:35357/v2.0/" - admin_password: "{{ 
admin_password | default('redhat') }}" - roles: - - openstack/create_users - - include: ../common/demo_tenant.yml - include: ../common/upload_image.yml @@ -19,7 +9,7 @@ vars: horizon_hosts_conf: /etc/httpd/conf.d/15-horizon_vhost.conf tasks: - - lineinfile: dest={{ horizon_hosts_conf }} insertafter="ServerAlias" line=" ServerAlias {{ ansible_ssh_host }}" state=present + - lineinfile: dest={{ horizon_hosts_conf }} insertafter="ServerAlias" line=" ServerAlias {{ ansible_host }}" state=present - service: name=httpd state=restarted - name: Get the list of available services @@ -73,7 +63,8 @@ controller_name: "{{ provisioner.nodes.controller.name }}" horizon_tests: admin_password: "{{ hostvars[controller_name].admin_password | default('redhat') }}" - demo_password: "{{ hostvars[controller_name].demo_password | default('redhat') }}" + demo_password: "{{ tester.accounts[0].password | default('redhat') }}" + demo_username: "{{ tester.accounts[0].username | default('demo') }}" tmp_controller_host: "{{ hostvars[controller_name].ansible_default_ipv4.address }}" services_status: enabled_services: "{{ hostvars[controller_name].integration_enabled_services }}" @@ -92,6 +83,7 @@ option={{ item.key }} value={{ item.value }} with_items: + - { section: 'identity', key: 'username', value: "{{ horizon_tests.demo_username }}"} - { section: 'identity', key: 'password', value: "{{ horizon_tests.demo_password }}"} - { section: 'identity', key: 'admin_password', value: "{{ horizon_tests.admin_password }}"} - { section: 'identity', key: 'rh_portal_login', value: "{{ tester.integration.subscription.username }}" } diff --git a/playbooks/tester/integration/horizon/run.yml b/playbooks/tester/integration/horizon/run.yml index 614142b37..2cc13433a 100644 --- a/playbooks/tester/integration/horizon/run.yml +++ b/playbooks/tester/integration/horizon/run.yml @@ -11,7 +11,9 @@ BROWSER_NAME: "{{ lookup('env', 'BROWSER_NAME') }}" BROWSER_VERSION: "{{ lookup('env', 'BROWSER_VERSION') }}" 
       BROWSER_PLATFORM: "{{ lookup('env', 'BROWSER_PLATFORM') }}"
-    shell: source ~/{{ tester.venv_dir }}/bin/activate && nosetests -v -a "{{ tester.integration.tests_tag }}" --with-xunit --xunit-file=horizon.xml openstack_dashboard/test/integration_tests/tests chdir=~/{{ tester.dir }}
-    ignore_errors: True
+    shell: |
+      [ -d ~/{{ tester.venv_dir }} ] && source ~/{{ tester.venv_dir }}/bin/activate
+      nosetests -v -a "{{ tester.integration.tests_tag }}" --with-xunit --xunit-file=horizon.xml openstack_dashboard/test/integration_tests/tests chdir=~/{{ tester.dir }}
+    ignore_errors: true
     async: 21600
     poll: 30
\ No newline at end of file
diff --git a/playbooks/tester/integration/pre.yml b/playbooks/tester/integration/pre.yml
index f09b1ff7a..520f81023 100644
--- a/playbooks/tester/integration/pre.yml
+++ b/playbooks/tester/integration/pre.yml
@@ -20,6 +20,12 @@
     - yum: name={{ item }} state=present
       with_items: tester.packages

+    # requirements of Ansible git modules
+    - yum: name=git state=present
+
+    - yum: name=python-virtualenv state=present
+      when: "tester.pip_packages is defined and tester.pip_packages|length > 0"
+
 - name: Prepare repository with tests
   hosts: tester
   sudo: no
@@ -40,5 +46,16 @@
     - name: Install pip test requirements
       pip: name={{ item }} virtualenv=~/{{ tester.venv_dir }} virtualenv_site_packages=yes
       with_items: tester.pip_packages
+      when: "tester.pip_packages is defined and tester.pip_packages|length > 0"
+
+- name: Prepare the environment (users and tenant)
+  hosts: controller
+  sudo: no
+  gather_facts: yes
+  vars:
+    controller_auth_url: "http://{{ ansible_default_ipv4.address }}:35357/v2.0/"
+    admin_password: "{{ hostvars[provisioner.nodes.controller.name].admin_password | default('redhat') }}"
+  roles:
+    - openstack/create_users

 - include: "{{ tester.component }}/pre.yml"
diff --git a/playbooks/tester/jenkins/builders/run.yml b/playbooks/tester/jenkins/builders/run.yml
index db29a69fd..d6c85625c 100644
--- a/playbooks/tester/jenkins/builders/run.yml
+++ b/playbooks/tester/jenkins/builders/run.yml
@@ -21,7 +21,7 @@
     - name: Set the slave with the ansible playbook
       register: setup_slave
-      ignore_errors: True
+      ignore_errors: true
       shell: >
         ANSIBLE_ROLES_PATH=`pwd`/roles
         ANSIBLE_SSH_ARGS=""
diff --git a/playbooks/tester/jenkins/builders/test.yml b/playbooks/tester/jenkins/builders/test.yml
index c43494527..6af78bae2 100644
--- a/playbooks/tester/jenkins/builders/test.yml
+++ b/playbooks/tester/jenkins/builders/test.yml
@@ -2,7 +2,7 @@
 - name: test created slave
   hosts: openstack_nodes
   vars:
-    - ansible_ssh_user: "rhos-ci"
+    - ansible_user: "rhos-ci"
   tasks:
     - set_fact:
         return_errors: []
diff --git a/playbooks/tester/rally/post.yml b/playbooks/tester/rally/post.yml
index d437fa0d7..fd535fc03 100644
--- a/playbooks/tester/rally/post.yml
+++ b/playbooks/tester/rally/post.yml
@@ -45,14 +45,14 @@
       args:
         creates: "{{ tester.rally.outputdir }}/sla.txt"
      # register: sla_check
-      ignore_errors: True
+      ignore_errors: true

     - name: SLA Check JSON
       shell: "{{ tester.rally.path }}/bin/rally task sla_check --json > {{ tester.rally.outputdir }}/sla.json"
       args:
         creates: "{{ tester.rally.outputdir }}/sla.json"
      # register: sla_check
-      ignore_errors: True
+      ignore_errors: true

 # These need to be archived by Jenkins Somehow
diff --git a/playbooks/tester/rally/pre.yml b/playbooks/tester/rally/pre.yml
index 69df9d2ef..10dd6953f 100644
--- a/playbooks/tester/rally/pre.yml
+++ b/playbooks/tester/rally/pre.yml
@@ -52,7 +52,7 @@
       shell: "source /root/keystonerc_admin && nova flavor-create m1.nano 42 64 0 1"
       sudo: yes
       # ignore errors if flavor already created
-      ignore_errors: True
+      ignore_errors: true

     - name: Create Glance Image
       glance_image:
diff --git a/playbooks/tester/rally/run.yml b/playbooks/tester/rally/run.yml
index 93d98d26a..9afeb2e37 100644
--- a/playbooks/tester/rally/run.yml
+++ b/playbooks/tester/rally/run.yml
@@ -36,7 +36,7 @@
     - name: Create Rally deployment
       shell: "source {{ tester.rally.dir }}/keystonerc_admin && {{ tester.rally.path }}/bin/rally deployment create --fromenv --name {{ tester.rally.deployment }} | awk '/{{ tester.rally.deployment }}/ {print $2}'"
       register: rally_deployment_uuid
-      ignore_errors: True
+      ignore_errors: true

     - debug: var=rally_deployment_uuid
diff --git a/playbooks/tester/tempest/run.yml b/playbooks/tester/tempest/run.yml
index 9c5cd6383..39f147dac 100644
--- a/playbooks/tester/tempest/run.yml
+++ b/playbooks/tester/tempest/run.yml
@@ -36,5 +36,5 @@
     - name: run tempest
       shell: "{{ tester.dir }}/with_venv ./tools/run-tests.sh {{ tester.tempest.testr_args|default('') }} {{ tester.tempest.test_regex }} {{ skipfile }}"
-      ignore_errors: True
+      ignore_errors: true
       when: tester.tempest.test_regex is defined or (tester.tempest.whitelist is defined and tester.tempest.whitelist)
diff --git a/playbooks/tester/templates/hosts_slave.conf.j2 b/playbooks/tester/templates/hosts_slave.conf.j2
index 36f93d3de..02720397b 100644
--- a/playbooks/tester/templates/hosts_slave.conf.j2
+++ b/playbooks/tester/templates/hosts_slave.conf.j2
@@ -1,9 +1,9 @@
 {% for host in groups.openstack_nodes %}
-{{ hostvars[host].ansible_ssh_host }} ansible_ssh_user=fedora
+{{ hostvars[host].ansible_host }} ansible_user=fedora
 {% endfor %}

 [slave]
 {% for host in groups.openstack_nodes %}
-{{ hostvars[host].ansible_ssh_host }} ansible_ssh_user=fedora
+{{ hostvars[host].ansible_host }} ansible_user=fedora
 {% endfor %}
diff --git a/playbooks/tester/unittest/pre.yml b/playbooks/tester/unittest/pre.yml
index a5f33b99c..f6ea02b82 100644
--- a/playbooks/tester/unittest/pre.yml
+++ b/playbooks/tester/unittest/pre.yml
@@ -1,38 +1,6 @@
 ---
-- name: Test dependencies
-  sudo: yes
-  vars:
-    test_cfg: "{{ test_env }}"
+- name: Preparation tasks
   hosts: controller
-  tasks:
-    - name: Install test rpm dependencies
-      yum: pkg={{ item }} state=latest
-      with_items: test_cfg.setup.install
-      when:
-        test_cfg.setup | default(false) and test_cfg.setup.install | default(false)
-
-    - name: Remove unwanted rpms
-      yum: pkg={{ item }} state=absent
-      with_items: test_cfg.setup.remove
-      when:
-        test_cfg.setup | default(false) and test_cfg.setup.remove | default(false)
-
-- name: Install packages to convert and publish tests results
-  sudo: yes
-  hosts: controller
-  tasks:
-    - name: Install packages to convert subunit stream into junitxml
-      yum: name={{ item }} state=present
-      with_items:
-        - subunit-filters
-        - python-junitxml
-
-- name: print test configuration
-  hosts: controller
-  tasks:
-    - name: print component path
-      debug: var={{ component_path }}
-
-    - name: print test configuration
-      debug: var=test_env
-      register: env
+  gather_facts: yes
+  roles:
+    - component-test/pre
\ No newline at end of file
diff --git a/plugins/callbacks/human_log.py b/plugins/callbacks/human_log.py
index daad0f84b..54e090d95 100644
--- a/plugins/callbacks/human_log.py
+++ b/plugins/callbacks/human_log.py
@@ -22,12 +22,19 @@
 except ImportError:
     import json

+try:
+    from ansible.plugins.callback import CallbackBase
+    ANSIBLE2 = True
+except ImportError:
+    ANSIBLE2 = False
+
+
 # Fields to reformat output for
 FIELDS = ['cmd', 'command', 'start', 'end', 'delta', 'msg', 'stdout', 'stderr', 'results']


-class CallbackModule(object):
+class CallbackModule(CallbackBase if ANSIBLE2 else object):

     def human_log(self, data):
         if type(data) == dict:
             for field in FIELDS:
diff --git a/plugins/callbacks/timing.py b/plugins/callbacks/timing.py
index 70563759e..3ab35a472 100644
--- a/plugins/callbacks/timing.py
+++ b/plugins/callbacks/timing.py
@@ -1,7 +1,13 @@
 from datetime import datetime

+try:
+    from ansible.plugins.callback import CallbackBase
+    ANSIBLE2 = True
+except ImportError:
+    ANSIBLE2 = False

-class CallbackModule(object):
+
+class CallbackModule(CallbackBase if ANSIBLE2 else object):

     __color = '\033[01;30m'
     __endcolor = '\033[00m'
diff --git a/plugins/hacking/log_stdstream.py b/plugins/hacking/log_stdstream.py
index 77bc678f6..b01757915 100644
--- a/plugins/hacking/log_stdstream.py
+++ b/plugins/hacking/log_stdstream.py
@@ -6,6 +6,12 @@
 import codecs
 import locale

+try:
+    from ansible.plugins.callback import CallbackBase
+    ANSIBLE2 = True
+except ImportError:
+    ANSIBLE2 = False
+
 TIME_FORMAT = "%b %d %Y %H:%M:%S"
 MARK_FORMAT = "%(now)s ======== MARK ========\n"
 MSG_FORMAT = "%(now)s - %(category)s - %(data)s\n\n"
@@ -55,7 +61,7 @@
         fd.write(RESULTS_END)


-class CallbackModule(object):
+class CallbackModule(CallbackBase if ANSIBLE2 else object):

     """
     logs playbook results, per host, in /tmp/ansible/stdstream_logs
     """
diff --git a/roles/common-handlers/handlers/main.yml b/roles/common-handlers/handlers/main.yml
index 4ac075f51..8d8c74262 100644
--- a/roles/common-handlers/handlers/main.yml
+++ b/roles/common-handlers/handlers/main.yml
@@ -1,19 +1,23 @@
 ---
 - name: reboot
   sudo: no
-  local_action:
-    wait_for_ssh reboot_first=true host={{ hostvars[inventory_hostname].ansible_ssh_host }} user={{ hostvars[inventory_hostname].ansible_ssh_user }} key={{ hostvars[inventory_hostname].ansible_ssh_private_key_file }}
+  delegate_to: localhost
+  wait_for_ssh:
+    reboot_first: true
+    host: "{{ hostvars[inventory_hostname].ansible_host }}"
+    user: "{{ hostvars[inventory_hostname].ansible_user }}"
+    key: "{{ hostvars[inventory_hostname].ansible_ssh_private_key_file }}"

 - name: reboot_rdo_manager
   sudo: no
-  local_action:
-    wait_for_ssh
-    reboot_first=true
-    ssh_opts="-F ../../../ssh.config.ansible"
-    host="{{ ansible_ssh_host }}"
-    user="root"
-    key="{{ ansible_ssh_private_key_file }}"
-    sudo=false
+  delegate_to: localhost
+  wait_for_ssh:
+    reboot_first: true
+    ssh_opts: "-F ../../../ssh.config.ansible"
+    host: "{{ ansible_host }}"
+    user: "root"
+    key: "{{ ansible_ssh_private_key_file }}"
+    sudo: false
   notify:
     - Check instance uptime
diff --git a/roles/component-test/pre/tasks/packages.yml b/roles/component-test/pre/tasks/packages.yml
index 650c541c5..c21d2c7bf 100644
--- a/roles/component-test/pre/tasks/packages.yml
+++ b/roles/component-test/pre/tasks/packages.yml
@@ -1,4 +1,21 @@
 ---
+- name: disable any repos specified
+  sudo: yes
+  shell: yum-config-manager --disable {{ item }}
+  with_items: test_cfg.setup.disable_repos
+  when:
+    test_cfg.setup | default(false) and test_cfg.setup.disable_repos | default(false)
+
+- name: enable any additional repos to be used
+  sudo: yes
+  shell: yum-config-manager --enable {{ item }}
+  with_items: test_cfg.setup.enable_repos
+  when:
+    test_cfg.setup | default(false) and test_cfg.setup.enable_repos | default(false)
+
+- name: grab short reposlist
+  shell: yum repolist all
+
 - name: install test dependencies rpm needed to run test
   sudo: yes
   yum: pkg={{ item }} state=latest
diff --git a/roles/component-test/pre/tasks/pre.yml b/roles/component-test/pre/tasks/pre.yml
index 9b5a737f3..24fed9be1 100644
--- a/roles/component-test/pre/tasks/pre.yml
+++ b/roles/component-test/pre/tasks/pre.yml
@@ -1,12 +1,12 @@
 ---
 - name: compute the directory basename
-  set_fact: component_basename={{ tester.component.dir.split('/')|last }}
+  set_fact: component_basename={{ tests_path.split('/')|last }}

 - name: find the test dependencies file used for the test-run
-  set_fact: test_deps_file="{{ tester.component.dir + '/' + tester.component.config_file }}" #"
+  set_fact: test_deps_file="{{ tests_path + '/' + tester.component.config_file }}" #"

 - name: load config
-  include_vars: "{{test_deps_file}}"
+  include_vars: "{{ test_deps_file }}"
   register: result

 #TODO(abregman): add major and minor version in distro settings
@@ -16,14 +16,14 @@
 - name: set full release
   set_fact: full_release="{{ ansible_distribution + '-' + ansible_distribution_version }}"

-- name: set test_env
+- name: Set test_env
   set_fact: test_env="{{ test_config.virt[item]|default(omit) }}"
   with_items:
     - "{{ major_release }}"
     - "{{ full_release }}"

 - name: rsync tests dir to tester
-  synchronize: src="{{ tester.component.dir }}" dest="{{ ansible_env.HOME }}/" #"
+  synchronize: src="{{ tests_path }}" dest="{{ ansible_env.HOME }}/" #"
   register: result

 - name: print result
@@ -37,8 +37,8 @@
   sudo: yes
   command: "rhos-release {{ product.version.major }} {{ product.repo.rhos_release.extra_args|join(' ') }}"

-- name: print tester component dir
-  debug: var=tester.component.dir
+- name: Print component tests path
+  debug: var=tests_path

 - name: print HOME dir
   debug: var=ansible_env.HOME
diff --git a/roles/delorean/tasks/copy-rpm.yml b/roles/delorean/tasks/copy-rpm.yml
index f5994fb3a..9b3c6edc1 100644
--- a/roles/delorean/tasks/copy-rpm.yml
+++ b/roles/delorean/tasks/copy-rpm.yml
@@ -1,7 +1,26 @@
 - name: Create a directory to hold the delorean rpms
-  file: path={{ ansible_env.HOME }}/rpms state=directory
+  file:
+    path: "{{ ansible_env.HOME }}/delorean_rpms"
+    state: directory

-- name: Copy and rename the generated rpms
-  shell: >
-    cp {{ ansible_env.HOME }}/delorean/repos/*/*/*/*.rpm {{ ansible_env.HOME }}/rpms/;
-    rm -rf {{ ansible_env.HOME }}/delorean;
+- name: Copy the generated rpms
+  shell: |
+    find {{ ansible_env.HOME }}/delorean/data/repos -type f -name '*.rpm' -print0| xargs -0 cp -t {{ ansible_env.HOME }}/delorean_rpms/
+    rm -rf {{ ansible_env.HOME }}/delorean
+
+- name: Run createrepo on generated rpms
+  sudo: yes
+  shell: "createrepo delorean_rpms"
+  args:
+    chdir: "{{ ansible_env.HOME }}"
+
+- name: Compress the repo before fetching
+  shell: "tar czf delorean_rpms.tar.gz delorean_rpms"
+  args:
+    chdir: "{{ ansible_env.HOME }}"
+
+- name: Fetch the repo to the slave
+  fetch:
+    flat: yes
+    src: "{{ ansible_env.HOME }}/delorean_rpms.tar.gz"
+    dest: "{{ base_dir }}/delorean_rpms.tar.gz"
diff --git a/roles/delorean/tasks/install.yml b/roles/delorean/tasks/install.yml
index 8c1884e72..74cfa76a7 100644
--- a/roles/delorean/tasks/install.yml
+++ b/roles/delorean/tasks/install.yml
@@ -1,5 +1,5 @@
 - name: Ensure delorean package dependencies
-  yum: name=mock,python-virtualenv state=installed
+  yum: name=createrepo,mock,python-virtualenv,rpm-build state=installed
   sudo: yes

 - name: Create mock group
@@ -8,7 +8,7 @@
 - name: Add user to mock group
   sudo: yes
-  user: name=rhos-ci groups=mock
+  user: name={{ ansible_user }} groups=mock

 - name: Create virtualenv for Delorean
   command: virtualenv {{ ansible_env.HOME }}/delorean-venv creates='{{ ansible_env.HOME }}/delorean-venv'
@@ -28,8 +28,3 @@
   pip:
     name: tox
     virtualenv: '{{ ansible_env.HOME }}/delorean-venv'
-
-- name: Apply temporary fix
-  shell: 'git fetch https://review.gerrithub.io/openstack-packages/delorean refs/changes/75/255375/2 && git checkout FETCH_HEAD'
-  args:
-    chdir: "{{ ansible_env.HOME }}/delorean"
diff --git a/roles/delorean/templates/delorean_rpms.j2 b/roles/delorean/templates/delorean_rpms.j2
index 1282a30e5..e99f028f0 100644
--- a/roles/delorean/templates/delorean_rpms.j2
+++ b/roles/delorean/templates/delorean_rpms.j2
@@ -1,6 +1,6 @@
 [delorean-rpms]
 name=Delorean rpms
-baseurl=file:///home/{{ ansible_ssh_user }}/delorean_rpms
+baseurl=file://{{ ansible_env.HOME }}/delorean_rpms
 enabled=1
 gpgcheck=0
 priority=1
diff --git a/roles/delorean_rpms/tasks/main.yml b/roles/delorean_rpms/tasks/main.yml
index 0e5cd8e71..b758d0d2a 100644
--- a/roles/delorean_rpms/tasks/main.yml
+++ b/roles/delorean_rpms/tasks/main.yml
@@ -9,22 +9,24 @@
   command: "rhos-release {{ product.full_version|int }} {{ product.repo.rhos_release.extra_args|join(' ') }}"
   when: product.rpm is defined and product.rpm

-- name: Install createrepo
-  sudo: yes
-  yum: name=createrepo state=present
-
-- name: Create repo folder
-  file: path=/home/{{ ansible_ssh_user }}/delorean state=directory
+- name: Unpack the repo
+  unarchive:
+    src: "{{ base_dir }}/delorean_rpms.tar.gz"
+    dest: "{{ ansible_env.HOME }}"

-- name: copy the generated rpms
-  copy: src={{ item }} dest=/home/{{ ansible_ssh_user }}/delorean_rpms
-  with_fileglob:
-    - "{{ ansible_env.HOME }}/rpms/*.rpm"
+- name: Lower repo priorities from one
+  sudo: yes
+  shell: >
+    for file in /etc/yum.repos.d/*.repo; do
+      sed -i 's/priority=1/priority=2/' $file;
+    done

 - name: Setup repository configuration
   sudo: yes
-  template: "src={{ lookup('env', 'PWD') }}/roles/delorean/templates/delorean_rpms.j2 dest=/etc/yum.repos.d/delorean_rpms.repo"
+  template:
+    src: "{{ base_dir }}/khaleesi/roles/delorean/templates/delorean_rpms.j2"
+    dest: "/etc/yum.repos.d/delorean_rpms.repo"

-- name: Run createrepo to setup repo for patched rpm
+- name: print out current repo config
   sudo: yes
-  shell: "createrepo /home/{{ ansible_ssh_user }}/delorean_rpms"
+  command: yum -d 7 repolist
diff --git a/roles/depends-on/files/depends_on.py b/roles/depends-on/files/depends_on.py
index 82d5eb5f6..94c58c972 100755
--- a/roles/depends-on/files/depends_on.py
+++ b/roles/depends-on/files/depends_on.py
@@ -28,10 +28,15 @@
 import subprocess
 import sys
 import urlparse
+import yaml
+from argparse import ArgumentParser
+from glob import glob
+from jinja2 import Template

 # we ignore any other host reference
 ALLOWED_HOSTS = ["", "codeng", "review.gerrithub.io:29418"]

+
 def parse_commit_msg(msg=None):
     """Look for dependency links in the commit message."""
     if msg is None:
@@ -59,6 +64,7 @@
                      tag.group(1), tag.group(2))
     return tags

+
 def get_refspec_urls(tags):
     """Parsing the necessary url info for the referenced changes"""
     def_host = os.getenv("GERRIT_HOST", 'review.gerrithub.io')
@@ -79,10 +85,16 @@
         output = subprocess.check_output(shlex.split(cmd)).splitlines()[0]
         # parse it to json
         data = json.loads(output)
+
         if "currentPatchSet" not in data:
             logging.warning("failed to fetch data from gerrit for "
                             "Change-Id: %s", change)
             continue
+        if data.get("status") not in ["NEW"]:
+            logging.warning("Patch already merged "
+                            "Change-Id: %s", change)
+            continue
+
         parsed_url = urlparse.urlparse(data["url"])
         # gerrit does not provide the repo URL in the reply, we have to
         # construct it from the clues
@@ -94,42 +106,144 @@
         # get the repo name from the last part after the slash
         repo_folder = data["project"].split("/")[-1]
         repo_ref = data["currentPatchSet"]["ref"]
-        targets.append([repo_folder, repo_url, repo_ref])
+        repo_branch = data["branch"]
+
+        targets.append([repo_folder, repo_url, repo_ref, repo_branch])
         logging.debug("data query result for Change-Id: %s, server: %s:%s, "
-                      "folder %s, url: %s, ref: %s",
-                      change, host, port, repo_folder, repo_url, repo_ref)
+                      "folder %s, url: %s, ref: %s, branch:%s",
+                      change, host, port, repo_folder,
+                      repo_url, repo_ref, repo_branch)
     return targets

-def checkout_changes(targets, basedir="."):
-    """Fetch and checkout the changes for the target repos"""
+
+def update_repo(project, url, ref, branch, basedir):
     checkout_cmd = "git checkout FETCH_HEAD"
-    for folder, url, ref in targets:
-        folder_path = os.path.join(basedir, folder)
+    try:
+        # I didn't find the settings/rpm for the project
+        # so I'm going to try to fetch the changes if the
+        # directory for that project exists in the tree
+        folder_path = os.path.join(basedir, project)
         logging.debug("changing working dir to %s", folder_path)
         os.chdir(folder_path)
         fetch_cmd = "git fetch %s %s" % (url, ref)
         logging.debug("fetch command: %s", fetch_cmd)
         subprocess.Popen(shlex.split(fetch_cmd)).wait()
         subprocess.Popen(shlex.split(checkout_cmd)).wait()
+    except OSError:
+        logging.warning(
+            "Directory not found for {} skipping".format(project)
+        )
+
+
+def update_rpm(project, ref, branch, basedir, ksgen, filenumber):
+    output_dict = {}
+    # doing a late evaluation on ksgen_settings existence
+    # because it might not be needed
+    if not ksgen:
+        logging.error(
+            "ksgen_settings not found"
+        )
+        sys.exit(1)
+
+    rpm_instructions = glob("{}/khaleesi/settings/rpm/*{}.yml".format(basedir, project))
+    if not rpm_instructions:
+        logging.warning(
+            "khaleesi/settings/rpm/*{}.yml not found in {}".format(project, basedir)
+        )
+        return
+
+    with open(rpm_instructions[0], "r") as fd:
+        # the replace here is important because !lookup is not
+        # valid jinja2 template and it will be used later
+        output_dict = yaml.load(fd.read().replace("!lookup", ""))
+
+    # do the changes needed for this patch
+    output_dict["patch"]["gerrit"]["branch"] = branch
+    output_dict["patch"]["gerrit"]["refspec"] = ref
+
+    # but the change still leaves two private urls
+    # like private.building.gerrit.url
+    # luckly those exist in ksgen_settings and using
+    # jinja2 templates here will fill those values
+    t = Template(yaml.safe_dump(output_dict, default_flow_style=False))

-def test_module():
+    with open("{}/khaleesi/extra_settings_{}.yml".format(basedir, filenumber), "w") as fd:
+        fd.write(t.render(ksgen))
+        fd.write("\n")  # for extra niceness
+    logging.warning(
+        "wrote {}/khaleesi/extra_settings_{}.yml for {}".format(basedir, filenumber, project)
+    )
+
+
+def generate_config(targets, basedir=".", update=None, ksgenfile=None):
+    """
+    This works in two ways
+
+    if we know how to build the package (ie. it exists on settings/rpms/)
+    we generate one extra_settings_.yml for each of the packages
+
+    if we do not know how to build it but there's a directory with that name
+    under basedir we will update that to the ref specified and check that out
+
+    """
+    if not ksgenfile:
+        ksgenfile = "{}/khaleesi/ksgen_settings.yml".format(basedir)
+
+    try:
+        with open(ksgenfile, "r") as fd:
+            ksgen = yaml.load(fd)
+    except IOError:
+        ksgen = None
+
+    filenumber = 1
+    for project, url, ref, branch in targets:
+        if update == "repo":
+            update_repo(project, url, ref, branch, basedir)
+
+        elif update == "rpm":
+            update_rpm(project, ref, branch, basedir, ksgen, filenumber)
+            filenumber += 1
+
+
+def test_module(basedir, update, ksgenfile):
     """Test with some known working Change-Ids"""
     test_msg = ("This is a test commit message.\n\n"
                 "Depends-On: If4cea049\n"
-                "Depends-On: I1c3f14ba@codeng")
+                "Depends-On: Id0aef5ee6dcb@review.gerrithub.io:29418\n"
+                "Depends-On: I1c3f14ba@codeng\n"
+                "Depends-On: I62e3c43afd@codeng\n"
+                "Depends-On: I02c15311@codeng")
     test_tags = parse_commit_msg(base64.b64encode(test_msg))
     test_targets = get_refspec_urls(test_tags)
-    checkout_changes(test_targets, "/tmp")
+    generate_config(test_targets, basedir, update, ksgenfile)
+

-def run(repo_dir):
+def run(basedir, update, ksgenfile):
+    logging.warning(
+        "getting dependencies for {}".format(update)
+    )
     run_tags = parse_commit_msg()
     run_targets = get_refspec_urls(run_tags)
-    checkout_changes(run_targets, repo_dir)
+    if run_targets:
+        generate_config(run_targets, basedir, update, ksgenfile)
+    else:
+        logging.warning("Nothing to do. Exiting")
+

 if __name__ == "__main__":
     logging.basicConfig(level=logging.DEBUG)
-    if len(sys.argv) < 2:
-        print "Usage: %s " % (sys.argv[0])
-    else:
-        run(sys.argv[1])
-    #test_module()
+    ap = ArgumentParser("Generate changes to repos or rpm settings based on depends-on gerrit comments")
+    ap.add_argument('basedir',
+                    default=".",
+                    help="basedir to work from")
+    ap.add_argument('build',
+                    default="repo",
+                    choices=['repo', 'rpm'],
+                    nargs='?',
+                    help="What to build")
+    ap.add_argument('ksgen_settings',
+                    nargs='?',
+                    help="where to find the ksgen_settings.yml file")
+    args = ap.parse_args()
+    run(args.basedir, args.build, args.ksgen_settings)
+    # test_module(args.basedir, args.build, args.ksgen_settings)
diff --git a/roles/depends-on/tasks/main.yml b/roles/depends-on/tasks/main.yml
index d2e2c70d4..52191a72c 100644
--- a/roles/depends-on/tasks/main.yml
+++ b/roles/depends-on/tasks/main.yml
@@ -8,6 +8,8 @@
   ignore_errors: yes
   register: is_internal

+- debug: msg="getting dependencies for {{ update }}"
+
 - name: search and fetch dependent changes
-  script: depends_on.py {{ lookup('env', 'WORKSPACE') }}
+  script: depends_on.py {{ lookup('env', 'WORKSPACE') }} {{ update }}
   when: ("\"\" != \"{{ lookup('env', 'GERRIT_CHANGE_COMMIT_MESSAGE') }}\"" and {{ is_internal.rc }} == 0)
diff --git a/roles/libvirt/ssh_config/templates/ssh_config.j2 b/roles/libvirt/ssh_config/templates/ssh_config.j2
index 7edd56d14..ab50734e2 100644
--- a/roles/libvirt/ssh_config/templates/ssh_config.j2
+++ b/roles/libvirt/ssh_config/templates/ssh_config.j2
@@ -1,6 +1,6 @@
-Host libvirt_host
+Host libvirt_host
     User root
-    HostName {{ hostvars['libvirt_host'].ansible_ssh_host }}
+    HostName {{ hostvars['libvirt_host'].ansible_host }}
     ProxyCommand none
     IdentityFile {{ key_file }}
     BatchMode yes
diff --git a/roles/libvirt/ssh_config/templates/ssh_config_host.j2 b/roles/libvirt/ssh_config/templates/ssh_config_host.j2
index f6d6f9df7..083de9e40 100644
--- a/roles/libvirt/ssh_config/templates/ssh_config_host.j2
+++ b/roles/libvirt/ssh_config/templates/ssh_config_host.j2
@@ -1,7 +1,7 @@
 Host {{ item.value.name }}
     ServerAliveInterval 60
     TCPKeepAlive yes
-    ProxyCommand ssh -o ConnectTimeout=30 -A {{ hostvars["libvirt_host"].ansible_ssh_user}}@{{ hostvars["libvirt_host"].ansible_ssh_host }} nc --wait 30 %h.{{ provisioner.network.nic.net_1.domain }} %p
+    ProxyCommand ssh -o ConnectTimeout=30 -A {{ hostvars["libvirt_host"].ansible_user}}@{{ hostvars["libvirt_host"].ansible_host }} nc --wait 30 %h.{{ provisioner.network.nic.net_1.domain }} %p
     ControlMaster auto
     ControlPath ~/.ssh/mux-%r@%h:%p
     ControlPersist 8h
diff --git a/roles/linux/rhel/rhos/meta/main.yml b/roles/linux/rhel/rhos/meta/main.yml
deleted file mode 100644
index f18391d5c..000000000
--- a/roles/linux/rhel/rhos/meta/main.yml
+++ /dev/null
@@ -1,4 +0,0 @@
----
-dependencies:
-- { role: linux }
-- { role: common }
\ No newline at end of file
diff --git a/roles/linux/rhel/rhos/tasks/main.yml b/roles/linux/rhel/rhos/tasks/main.yml
deleted file mode 100644
index 6275befc1..000000000
--- a/roles/linux/rhel/rhos/tasks/main.yml
+++ /dev/null
@@ -1,167 +0,0 @@
----
-- name: Create the RHOS Release Repository
-  template: src=rhos-release.repo.j2 dest=/etc/yum.repos.d/rhos-release.repo
-  when: product.repo_type in ['poodle', 'puddle']
-
-- name: install rhos-release
-  yum: name=rhos-release state=latest
-  when: product.repo_type in ['poodle', 'puddle']
-
-- name: Execute rhos-release {{ product.version.major }}
-  command: "rhos-release {{ product.version.major }}"
-  when: (product.repo_type in ['puddle'] and installer.name not in ['instack', 'rdo-manager'])
-
-- name: Execute rhos-release for OSP-Director {{ product.full_version }}
-  command: "rhos-release {{ product.full_version }}"
-  when: (product.repo_type in ['puddle'] and installer.name in ['instack', 'rdo-manager'])
-
-#hack for the multiple products involved in setting up rdo-manager
-- name: Create the RHOS Release Repository for rdo-manager
-  template: src=rhos-release.repo.j2 dest=/etc/yum.repos.d/rhos-release.repo
-  when: product_override_version is defined and product.repo_type_override == 'rhos-release'
-
-- name: install rhos-release for rdo-manager
-  yum: name=rhos-release state=latest
-  when: product_override_version is defined and product.repo_type_override == 'rhos-release'
-
-- name: Execute rhos-release for rdo-manager {{ product_override_version|int }}
-  command: "rhos-release {{ product_override_version|int }}"
-  when: product_override_version is defined and product.repo_type_override == 'rhos-release'
-
-- name: Execute rhos-release {{ product.version.major }}{{ installer_host_repo | default('')}}
-  command: "rhos-release {{ product.version.major }}{{ installer_host_repo | default('')}}"
-  when: installer is defined and installer.name == "foreman" and installer_host_repo | default('') != ''
-
-- name: Change server location for repos in rhos-release
-  replace:
-    dest=/etc/yum.repos.d/rhos-release-{{ product.version.major }}{{ installer_host_repo | default('')}}-rhel-{{ ansible_distribution_version|string}}.repo
-    regexp={{ location.defaultrepo_string }}
-    replace={{ location.map[user_location] }}
-  when: user_location is defined
-
-- name: Change puddle version for repos in rhos-release
-  replace:
-    dest=/etc/yum.repos.d/rhos-release-{{ product.version.major }}{{ installer_host_repo | default('')}}-rhel-{{ ansible_distribution_version|string}}.repo
-    regexp=/latest/RH{{ ansible_distribution_major_version|string }}
-    replace=/{{ product.repo.puddle_pin_version }}/RH{{ ansible_distribution_major_version|string }}
-  when: (product.repo.puddle_pin_version is defined and product.repo.puddle_pin_version != 'latest' and product.repo_type == 'puddle')
-
-- name: Change Foreman version for repos in rhos-release
-  replace:
-    dest=/etc/yum.repos.d/rhos-release-{{ product.version.major }}{{ installer_host_repo | default('')}}-rhel-{{ ansible_distribution_version|string }}.repo
-    regexp=/Foreman/latest/
-    replace=/Foreman/{{ product.repo.foreman_pin_version }}/
-  when: (product.repo.foreman_pin_version is defined and product.repo.foreman_pin_version != 'latest')
-
-- name: Enable RHSM
-  shell: >
-    rhos-release -x {{ product.version.major }}{{ installer_host_repo | default('')}};
-    rm -Rf /etc/yum.repos.d/rhos-release.repo;
-    subscription-manager register --username {{ distro.rhel.subscription.username }} --password {{ distro.rhel.subscription.password }};
-    subscription-manager subscribe --pool {{ distro.rhel.subscription.pool }};
-    subscription-manager repos --disable=*;
-  when: (product.repo_type == 'rhsm' and ansible_distribution_version|int == 7)
-
-- name: Enable RHSM yum repos
-  shell: >
-    subscription-manager repos --disable=*;
-    subscription-manager repos --enable=rhel-7-server-rpms;
-    subscription-manager repos --enable=rhel-7-server-optional-rpms;
-    subscription-manager repos --enable=rhel-7-server-extras-rpms;
-    subscription-manager repos --enable=rhel-7-server-openstack-{{ product.full_version }}-rpms;
-    yum-config-manager --setopt="rhel-7-server-openstack-{{ product.full_version }}-rpms.priority=1" --enable rhel-7-server-openstack-{{ product.full_version }}-rpms;
-  when: (product.repo_type == 'rhsm' and ansible_distribution_version|int == 7)
-
-- name: Enable RHSM for rdo-manager
-  shell: >
-    rm -Rf /etc/yum.repos.d/rhos-release.repo;
-    subscription-manager register --username {{ distro.rhel.subscription.username }} --password {{ distro.rhel.subscription.password }};
-    subscription-manager subscribe --pool {{ distro.rhel.subscription.physical_pool }};
-    subscription-manager repos --disable=*;
-  when: (product_repo_type_override is defined and product_repo_type_override == 'rhsm' and ansible_distribution_version|int == 7)
-
-- name: Enable RHSM yum repos for rdo-manager
-  shell: >
-    subscription-manager repos --disable=*;
-    subscription-manager repos --enable=rhel-7-server-rpms;
-    subscription-manager repos --enable=rhel-7-server-optional-rpms;
-    subscription-manager repos --enable=rhel-7-server-extras-rpms;
-    subscription-manager repos --enable=rhel-7-server-openstack-{{ product_override_version }}-rpms;
-    yum-config-manager --setopt="rhel-7-server-openstack-{{ product_override_version }}-rpms.priority=1" --enable rhel-7-server-openstack-{{ product_override_version }}-rpms;
-  when: (product_repo_type_override is defined and product_repo_type_override == 'rhsm' and ansible_distribution_version|int == 7)
-
-
-# new advanced repos
-- name: Create the RHOS Advanced repository
-  shell: "rhos-release -x {{ product.version.major }}; rhos-release {{ product.version.major }}a"
-  when: product.repo_type == 'advanced'
-
-# poodle repos
-- name: Create the RHOS poodle repository
-  shell: "rhos-release -x {{ product.version.major }}{{ installer_host_repo | default('')}}; rhos-release -d {{ product.version.major }}{{ installer_host_repo | default('')}}"
-  when: (product.repo_type in ['poodle'] and installer is defined and installer.name not in ['instack', 'rdo-manager'])
-
-- name: Create the OSP-Director poodle repository
-  shell: "rhos-release -x {{ product.full_version }}{{ installer_host_repo | default('')}}; rhos-release -d {{ product.full_version }}{{ installer_host_repo | default('')}}"
-  when: (product.repo_type in ['poodle'] and installer is defined and installer.name in ['instack', 'rdo-manager'])
-
-- name: Create the RHOS Advanced poodle repository
-  shell: "rhos-release -x {{ product.full_version }}; rhos-release -d {{ product.full_version }}a"
-  when: product.repo_type == 'poodle_advanced'
-
-- name: Create the COPR repos required for component tests
-  template: src=component-test-copr-repo.j2 dest=/etc/yum.repos.d/component-test-copr.repo
-  when: (test.type.name is defined and (test.type.name == 'unit-test' or test.type.name == 'pep8-test') and ansible_distribution_version|int >= 6)
-
-- name: Change poodle version for repos in rhos-release
-  shell: "rhos-release -x {{ product.version.major }}; rhos-release {{ product.version.major }} -d -p {{ product.repo.poodle_pin_version }}"
-  when: (product.repo.poodle_pin_version is defined and product.repo.poodle_pin_version != 'latest|GA' and product.repo_type == 'poodle' and installer_host_repo | default('') == '')
-
-- name: Change poodle version for repos in rhos-release for OFI installer host
-  shell: "rhos-release -x {{ product.version.major }}{{ installer_host_repo | default('')}}; rhos-release {{ product.version.major }}{{ installer_host_repo | default('')}} -d -p {{ product.repo.installer_poodle_pin_version }}"
-  when: (product.repo.installer_poodle_pin_version is defined and product.repo.installer_poodle_pin_version != 'latest|GA' and product.repo_type == 'poodle' and installer is defined and installer.name == "foreman" and installer_host_repo | default('') != '')
-
-- name: Change poodle version for repos in rhos-release for GA -> Latest Poodle
-  shell: "rhos-release -x {{ product.version.major }}; rhos-release {{ product.version.major }} -p {{ product.repo.poodle_pin_version }}"
-  when: (product.repo.poodle_pin_version is defined and product.repo.poodle_pin_version == 'GA' and product.repo_type == 'poodle' and installer_host_repo | default('') == '')
-
-- name: Change poodle version for repos in rhos-release for OFI installer host and GA-> latest Poodle
-  shell: "rhos-release -x {{ product.version.major }}{{ installer_host_repo | default('')}}; rhos-release {{ product.version.major }}{{ installer_host_repo | default('')}} -d -p {{ product.repo.installer_poodle_pin_version }}"
-  when: (product.repo.installer_poodle_pin_version is defined and product.repo.installer_poodle_pin_version == 'latest|GA' and product.repo_type == 'poodle' and installer is defined and installer.name == "foreman" and installer_host_repo | default('') != '')
-
-# copr repos
-- name: enable tripleo copr repository
-  shell: "sudo curl -o /etc/yum.repos.d/slagle-openstack-m.repo {{ product.repo.copr[ ansible_distribution ][distro.full_version] }}"
-  when: product.repo.copr is defined
-  register: rdo_repo_output
-
-- name: print rdo_repo_output
-  debug: var=rdo_repo_output.stdout
-  when: product.repo.copr is defined
-
-- name: ensure yum-utils
-  yum: name={{ item }} state=present
-  with_items:
-    - yum-utils
-
-- name: Disable default foreman puddle rhelosp repo when using poodle
-  shell: /usr/bin/yum-config-manager --disable 'rhelosp-*-OS-Foreman'
-  when: product.repo_type == 'poodle'
-
-# custom repos
-- name: enable a custom repository
-  yum: name="{{ installer.custom_repo }}"
-  when: installer.custom_repo is defined
-  register: rdo_repo_output
-
-- name: print rdo_repo_output
-  debug: var=rdo_repo_output.stdout
-  when: installer.custom_repo is defined
-
-- name: Remove any rhel repo created by rdo-ci #used when both rdo and rhos are in play
-  file: path=/etc/yum.repos.d/rhel_ci.repo state=absent
-  notify:
-    - Yum clean all
-
-- name: List available yum repositories
-  command: yum -d 9 repolist
diff --git a/roles/linux/rhel/rhos/templates/foreman-poodle.repo.j2 b/roles/linux/rhel/rhos/templates/foreman-poodle.repo.j2
deleted file mode 100644
index 35ef63351..000000000
--- a/roles/linux/rhel/rhos/templates/foreman-poodle.repo.j2
+++ /dev/null
@@ -1,5 +0,0 @@
-[foreman-poodle]
-name=foreman-poodle
-baseurl={{ product.repo.foreman_poodle[ansible_distribution][ansible_distribution_version] }}
-enabled=0
-gpgcheck=0
diff --git a/roles/linux/rhel/rhos/templates/rhos-advanced.repo.j2 b/roles/linux/rhel/rhos/templates/rhos-advanced.repo.j2
deleted file mode 100644
index abe9f5b78..000000000
--- a/roles/linux/rhel/rhos/templates/rhos-advanced.repo.j2
+++ /dev/null
@@ -1,5 +0,0 @@
-[rhos-advanced]
-name=rhos-advanced
-baseurl={{ product.repo.advanced[ansible_distribution][ansible_distribution_version] }}
-enabled=0
-gpgcheck=0
diff --git a/roles/linux/rhel/rhos/templates/rhos-poodle.repo.j2 b/roles/linux/rhel/rhos/templates/rhos-poodle.repo.j2
deleted file mode 100644
index 5e155485a..000000000
--- a/roles/linux/rhel/rhos/templates/rhos-poodle.repo.j2
+++ /dev/null
@@ -1,5 +0,0 @@
-[rhos-poodle]
-name=rhos-poodle
-baseurl={{ product.repo.poodle[ansible_distribution][ansible_distribution_version] }}
-enabled=0
-gpgcheck=0
diff --git a/roles/linux/rhel/rhos/templates/rhos-release.repo.j2 b/roles/linux/rhel/rhos/templates/rhos-release.repo.j2
deleted file mode 100644
index 2d98530fc..000000000
--- a/roles/linux/rhel/rhos/templates/rhos-release.repo.j2
+++ /dev/null
@@ -1,6 +0,0 @@
-[rhos-release]
-name=rhos-release
-baseurl={{ product.rpmrepo[ansible_distribution] }}
-enabled=1
-gpgcheck=0
-
diff --git a/roles/openstack/create_users/tasks/main.yml b/roles/openstack/create_users/tasks/main.yml
index a5983b9e5..ecae97999 100644
--- a/roles/openstack/create_users/tasks/main.yml
+++ b/roles/openstack/create_users/tasks/main.yml
@@ -26,3 +26,19 @@
       endpoint: "{{ controller_auth_url }}"
       state: present
   with_items: tester.accounts
+
+# assign the roles to users
+# TODO: Ansible 2, use skip_missing: yes
+- keystone_user:
+    role: "{{ item.1 }}"
+    user: "{{ item.0.username }}"
+    password: "{{ item.0.password }}"
+    tenant: "{{ item.0.tenant_name }}"
+    login_user: admin
+    login_password: "{{ admin_password }}"
+    login_tenant_name: admin
+    endpoint: "{{ controller_auth_url }}"
+    state: present
+  with_subelements:
+    - tester.accounts
+    - roles
diff --git a/roles/openstack/openstack-status/tasks/main.yml
b/roles/openstack/openstack-status/tasks/main.yml index 36769253e..f1e205bfc 100644 --- a/roles/openstack/openstack-status/tasks/main.yml +++ b/roles/openstack/openstack-status/tasks/main.yml @@ -1,9 +1,12 @@ --- - name: Wait for openstack port 35357 to open sudo: no - local_action: - wait_for host={{ hostvars[inventory_hostname].ansible_ssh_host }} - port=35357 delay=10 timeout=120 + delegate_to: localhost + wait_for: + host: "{{ hostvars[inventory_hostname].ansible_host }}" + port: 35357 + delay: 10 + timeout: 120 register: wait_for_openstack - name: Fail if any of them fail diff --git a/roles/patch_rpm/tasks/pre.yml b/roles/patch_rpm/tasks/pre.yml index c887bd879..ef23f7214 100644 --- a/roles/patch_rpm/tasks/pre.yml +++ b/roles/patch_rpm/tasks/pre.yml @@ -33,10 +33,11 @@ git remote add -f patches {{ tmp_dir }}/dist-git/{{ patch.gerrit.name }}; git fetch patches; git fetch patches --tags; - git branch {{ branch_name }}-patches patches/gerrit-patch; + git branch {{ product.name }}-{{ product.version.major }}.{{ product.version.minor }}-patches patches/gerrit-patch; if [ "{{ patch.upstream is defined }}" == "True" ]; then git remote add -f upstream {{ tmp_dir }}/dist-git/{{ patch.upstream is defined and patch.upstream.name }}; git fetch upstream; + git fetch upstream --tags; fi; args: chdir: "{{ tmp_dir }}/dist-git/{{ patch.dist_git.name }}" diff --git a/roles/patch_rpm/templates/patched_rpms.j2 b/roles/patch_rpm/templates/patched_rpms.j2 index c6bcdd8b1..228a6f09a 100644 --- a/roles/patch_rpm/templates/patched_rpms.j2 +++ b/roles/patch_rpm/templates/patched_rpms.j2 @@ -1,6 +1,6 @@ [patched-rpms] name=patched component rpms -baseurl=file:///home/{{ ansible_ssh_user }}/dist-git/{{ patch.dist_git.name }} +baseurl=file:///home/{{ ansible_user }}/dist-git/{{ patch.dist_git.name }} enabled=1 gpgcheck=0 priority=1 diff --git a/roles/system/set_hostname/tasks/main.yml b/roles/system/set_hostname/tasks/main.yml new file mode 100644 index 000000000..9c6ad8e8d ---
/dev/null +++ b/roles/system/set_hostname/tasks/main.yml @@ -0,0 +1,33 @@ +--- +- name: Configure hostname + hostname: + name: "{{ inventory_hostname }}" + register: newhostname + +- name: Ensure hostname is in /etc/hosts + lineinfile: + dest: /etc/hosts + regexp: '.*{{ inventory_hostname }}$' + line: "{{ ansible_default_ipv4.address }} {{inventory_hostname}}" + state: present + when: ansible_default_ipv4.address is defined + +- name: check for cloud.cfg + stat: + path: /etc/cloud/cloud.cfg + register: cloudcfg + when: newhostname|changed + +- name: Prevent cloud-init from controlling hostname + lineinfile: + dest: /etc/cloud/cloud.cfg + regexp: "^preserve_hostname:" + line: "preserve_hostname: true" + when: newhostname|changed and cloudcfg.stat.exists + +- name: restart systemd-hostnamed + service: + name: systemd-hostnamed + state: restarted + when: ansible_distribution_version|int > 6 + diff --git a/settings/installer/packstack/storage/image/backend/ceph.yml b/settings/installer/packstack/storage/image/backend/ceph.yml index d32cde301..1f9bdcd1e 100644 --- a/settings/installer/packstack/storage/image/backend/ceph.yml +++ b/settings/installer/packstack/storage/image/backend/ceph.yml @@ -3,7 +3,7 @@ nodes: controller: packages: - - ceph-common + default: ceph-common storage: services: diff --git a/settings/installer/packstack/storage/volume/backend/ceph.yml b/settings/installer/packstack/storage/volume/backend/ceph.yml index b3f3a9458..f6081eb9f 100644 --- a/settings/installer/packstack/storage/volume/backend/ceph.yml +++ b/settings/installer/packstack/storage/volume/backend/ceph.yml @@ -3,7 +3,7 @@ nodes: controller: packages: - - ceph-common + default: ceph-common storage: services: diff --git a/settings/installer/packstack/storage/volume/backend/gluster.yml b/settings/installer/packstack/storage/volume/backend/gluster.yml index 4b22ea12c..1163eb570 100644 --- a/settings/installer/packstack/storage/volume/backend/gluster.yml +++ 
b/settings/installer/packstack/storage/volume/backend/gluster.yml @@ -3,7 +3,7 @@ nodes: controller: packages: - - glusterfs-fuse + default: glusterfs-fuse storage: services: diff --git a/settings/installer/packstack/storage/volume/backend/thinlvm.yml b/settings/installer/packstack/storage/volume/backend/thinlvm.yml index 6cece867b..6ca300728 100644 --- a/settings/installer/packstack/storage/volume/backend/thinlvm.yml +++ b/settings/installer/packstack/storage/volume/backend/thinlvm.yml @@ -1,5 +1,11 @@ --- !extends:common/shared.yml +nodes: + controller: + packages: + default: targetcli + '6.0': scsi-target-utils + storage: services: - cinder @@ -17,6 +23,7 @@ storage: rhos-6-thinlvm: volume_driver: "cinder.volume.drivers.lvm.LVMISCSIDriver" lvm_type: "thin" + iscsi_helper: "lioadm" "5.0": *cinder_cfg_old "6.0": diff --git a/settings/installer/packstack/storage/volume/backend/xtremio_fc.yml b/settings/installer/packstack/storage/volume/backend/xtremio_fc.yml index 89979f569..2a68cca85 100644 --- a/settings/installer/packstack/storage/volume/backend/xtremio_fc.yml +++ b/settings/installer/packstack/storage/volume/backend/xtremio_fc.yml @@ -12,7 +12,7 @@ storage: "icehouse": &cinder_cfg DEFAULT: - volume_driver: "cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver" + volume_driver: "cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver" san_ip: !lookup private.storage.volume.backend.xtremio.san_ip san_login: !lookup private.storage.volume.backend.xtremio.san_login san_password: !lookup private.storage.volume.backend.xtremio.san_password diff --git a/settings/installer/rdo_manager.yml b/settings/installer/rdo_manager.yml index 0a3e55719..78d978cc6 100644 --- a/settings/installer/rdo_manager.yml +++ b/settings/installer/rdo_manager.yml @@ -44,6 +44,7 @@ job: - /home/stack/*.log - /home/stack/*.json - /home/stack/*.conf + - /home/stack/*.yml - /home/stack/deploy-overcloudrc - /home/stack/network-environment.yaml - /home/stack/tempest/*.xml @@ -60,3 +61,6 @@ 
defaults: tempest_skip_file: none updates: none custom_deploy: none + introspection_method: bulk + proxy: none + ssl: "off" diff --git a/settings/installer/rdo_manager/flavor/justright.yml b/settings/installer/rdo_manager/flavor/justright.yml index 70ac642a3..ecd301320 100644 --- a/settings/installer/rdo_manager/flavor/justright.yml +++ b/settings/installer/rdo_manager/flavor/justright.yml @@ -3,5 +3,6 @@ installer: nodes: node_mem: 6144 node_cpu: 1 + node_disk: 50 undercloud_node_mem: 8192 undercloud_node_cpu: 4 diff --git a/settings/installer/rdo_manager/images/import.yml b/settings/installer/rdo_manager/images/import.yml index caf649e81..cdc272599 100644 --- a/settings/installer/rdo_manager/images/import.yml +++ b/settings/installer/rdo_manager/images/import.yml @@ -9,7 +9,6 @@ installer: - overcloud-full '8-director': files: - - deploy-ramdisk-ironic - ironic-python-agent - overcloud-full url: @@ -21,6 +20,7 @@ installer: ga: '7.0': !lookup private.installer.images.rhos.7_director.GA.7.0 '7.1': !lookup private.installer.images.rhos.7_director.GA.7.1 + '7.2': !lookup private.installer.images.rhos.7_director.GA.7.2 '8-director': latest: '8.0': !lookup private.installer.images.rhos.8_director.latest.8.0 diff --git a/settings/installer/rdo_manager/introspection_method/bulk.yml b/settings/installer/rdo_manager/introspection_method/bulk.yml new file mode 100644 index 000000000..09ca68635 --- /dev/null +++ b/settings/installer/rdo_manager/introspection_method/bulk.yml @@ -0,0 +1,2 @@ +installer: + introspection_method: bulk diff --git a/settings/installer/rdo_manager/introspection_method/node_by_node.yml b/settings/installer/rdo_manager/introspection_method/node_by_node.yml new file mode 100644 index 000000000..834f63260 --- /dev/null +++ b/settings/installer/rdo_manager/introspection_method/node_by_node.yml @@ -0,0 +1,2 @@ +installer: + introspection_method: node_by_node diff --git a/settings/installer/rdo_manager/network/neutron/isolation/bond_with_vlans.yml 
b/settings/installer/rdo_manager/network/neutron/isolation/bond_with_vlans.yml index 6c2cabd9a..9a8d36203 100644 --- a/settings/installer/rdo_manager/network/neutron/isolation/bond_with_vlans.yml +++ b/settings/installer/rdo_manager/network/neutron/isolation/bond_with_vlans.yml @@ -2,3 +2,4 @@ installer: network: isolation: bond_with_vlans + protocol: ipv4 diff --git a/settings/installer/rdo_manager/network/neutron/isolation/default.yml b/settings/installer/rdo_manager/network/neutron/isolation/default.yml index 780e512b0..3c7ec08b0 100644 --- a/settings/installer/rdo_manager/network/neutron/isolation/default.yml +++ b/settings/installer/rdo_manager/network/neutron/isolation/default.yml @@ -2,3 +2,4 @@ installer: network: isolation: default + protocol: ipv4 diff --git a/settings/installer/rdo_manager/network/neutron/isolation/none.yml b/settings/installer/rdo_manager/network/neutron/isolation/none.yml index 9d3b5a541..d44348689 100644 --- a/settings/installer/rdo_manager/network/neutron/isolation/none.yml +++ b/settings/installer/rdo_manager/network/neutron/isolation/none.yml @@ -2,3 +2,4 @@ installer: network: isolation: none + protocol: ipv4 diff --git a/settings/installer/rdo_manager/network/neutron/isolation/single_nic_vlans.yml b/settings/installer/rdo_manager/network/neutron/isolation/single_nic_vlans.yml index 240edc22e..6e3f4fa1b 100644 --- a/settings/installer/rdo_manager/network/neutron/isolation/single_nic_vlans.yml +++ b/settings/installer/rdo_manager/network/neutron/isolation/single_nic_vlans.yml @@ -2,3 +2,4 @@ installer: network: isolation: single_nic_vlans + protocol: ipv4 diff --git a/settings/installer/rdo_manager/network/neutron/isolation/single_nic_vlans_ipv6.yml b/settings/installer/rdo_manager/network/neutron/isolation/single_nic_vlans_ipv6.yml new file mode 100644 index 000000000..88f7efda2 --- /dev/null +++ b/settings/installer/rdo_manager/network/neutron/isolation/single_nic_vlans_ipv6.yml @@ -0,0 +1,5 @@ +--- +installer: + network: + 
isolation: single_nic_vlans_ipv6 + protocol: ipv6 diff --git a/settings/installer/rdo_manager/network/neutron/variant/vlan.yml b/settings/installer/rdo_manager/network/neutron/variant/vlan.yml new file mode 100644 index 000000000..e54983d08 --- /dev/null +++ b/settings/installer/rdo_manager/network/neutron/variant/vlan.yml @@ -0,0 +1,3 @@ +installer: + network: + variant: vlan diff --git a/settings/installer/rdo_manager/proxy/none.yml b/settings/installer/rdo_manager/proxy/none.yml new file mode 100644 index 000000000..89107df60 --- /dev/null +++ b/settings/installer/rdo_manager/proxy/none.yml @@ -0,0 +1,6 @@ +--- +installer: + proxy: 'none' + http_proxy_host: '' + http_proxy_port: '' + http_proxy_url: '' diff --git a/settings/installer/rdo_manager/ssl/off.yml b/settings/installer/rdo_manager/ssl/off.yml new file mode 100644 index 000000000..1df3e36ec --- /dev/null +++ b/settings/installer/rdo_manager/ssl/off.yml @@ -0,0 +1,2 @@ +installer: + ssl: false diff --git a/settings/installer/rdo_manager/ssl/on.yml b/settings/installer/rdo_manager/ssl/on.yml new file mode 100644 index 000000000..3284cbaa1 --- /dev/null +++ b/settings/installer/rdo_manager/ssl/on.yml @@ -0,0 +1,2 @@ +installer: + ssl: true diff --git a/settings/installer/rdo_manager/topology/ha.yml b/settings/installer/rdo_manager/topology/ha.yml new file mode 100644 index 000000000..1693ee386 --- /dev/null +++ b/settings/installer/rdo_manager/topology/ha.yml @@ -0,0 +1,32 @@ +installer: + topology_name: ha + network_restart: True + nodes: + node_count: 9 + controller: + remote_user: heat-admin + nova_list_type: controller + flavor: baremetal + scale: 3 + tester: + remote_user: root + compute: + type: Compute + nova_list_type: compute + flavor: baremetal + scale: 3 + blockstorage: + type: Cinder-Storage + nova_list_type: cinderstorage + flavor: baremetal + scale: 0 + swiftstorage: + type: Swift-Storage + nova_list_type: swiftstorage + flavor: baremetal + scale: 0 + cephstorage: + type: Ceph-Storage +
nova_list_type: cephstorage + flavor: baremetal + scale: 3 diff --git a/settings/product/rdo/version/juno/build/latest.yml b/settings/product/rdo/version/juno/build/latest.yml index d3f106168..b85b62d68 100644 --- a/settings/product/rdo/version/juno/build/latest.yml +++ b/settings/product/rdo/version/juno/build/latest.yml @@ -3,5 +3,3 @@ product: repo: puddle_pin_version: 'latest' poodle_pin_version: 'latest' - foreman_pin_version: 'latest' - foreman_poodle_pin_version: 'latest' diff --git a/settings/product/rdo/version/kilo/build/latest.yml b/settings/product/rdo/version/kilo/build/latest.yml index f7a3cd256..9e59095c2 100644 --- a/settings/product/rdo/version/kilo/build/latest.yml +++ b/settings/product/rdo/version/kilo/build/latest.yml @@ -4,5 +4,3 @@ product: repo: puddle_pin_version: 'latest' poodle_pin_version: 'latest' - foreman_pin_version: 'latest' - foreman_poodle_pin_version: 'latest' diff --git a/settings/product/rdo/version/liberty/build/latest.yml b/settings/product/rdo/version/liberty/build/latest.yml index fcac9d66a..61c4b599f 100644 --- a/settings/product/rdo/version/liberty/build/latest.yml +++ b/settings/product/rdo/version/liberty/build/latest.yml @@ -4,8 +4,6 @@ product: repo: puddle_pin_version: 'latest' poodle_pin_version: 'latest' - foreman_pin_version: 'latest' - foreman_poodle_pin_version: 'latest' installer: images: diff --git a/settings/product/rdo/version/liberty/repo/production.yml b/settings/product/rdo/version/liberty/repo/production.yml index e58cb5944..744d6755f 100644 --- a/settings/product/rdo/version/liberty/repo/production.yml +++ b/settings/product/rdo/version/liberty/repo/production.yml @@ -3,3 +3,7 @@ product: repo_type: production # production repo details are in liberty.yml itself # since it is needed by default + +workarounds: + rhbz1278972: + enabled: True diff --git a/settings/product/rdo/version/mitaka/build/latest.yml b/settings/product/rdo/version/mitaka/build/latest.yml index fcac9d66a..61c4b599f 100644 --- 
a/settings/product/rdo/version/mitaka/build/latest.yml +++ b/settings/product/rdo/version/mitaka/build/latest.yml @@ -4,8 +4,6 @@ product: repo: puddle_pin_version: 'latest' poodle_pin_version: 'latest' - foreman_pin_version: 'latest' - foreman_poodle_pin_version: 'latest' installer: images: diff --git a/settings/product/rhos/repo/common/common.yml b/settings/product/rhos/repo/common/common.yml index e348a03a8..8bb8a3696 100644 --- a/settings/product/rhos/repo/common/common.yml +++ b/settings/product/rhos/repo/common/common.yml @@ -1,6 +1,7 @@ product: rpm: !lookup private.distro.rhel.rhos_release_rpm repo: + state: pinned release: latest location: bos mirror: download.eng.{{ !lookup product.repo.location }}.redhat.com diff --git a/settings/product/rhos/repo/poodle.yml b/settings/product/rhos/repo/poodle.yml index 83adea54c..c591fffd2 100644 --- a/settings/product/rhos/repo/poodle.yml +++ b/settings/product/rhos/repo/poodle.yml @@ -4,6 +4,3 @@ product: repo: type: poodle short_type: pod - rhos_release: - extra_args: - - "-d" diff --git a/settings/product/rhos/version/5.0/build/latest.yml b/settings/product/rhos/version/5.0/build/latest.yml index d3f106168..b85b62d68 100644 --- a/settings/product/rhos/version/5.0/build/latest.yml +++ b/settings/product/rhos/version/5.0/build/latest.yml @@ -3,5 +3,3 @@ product: repo: puddle_pin_version: 'latest' poodle_pin_version: 'latest' - foreman_pin_version: 'latest' - foreman_poodle_pin_version: 'latest' diff --git a/settings/product/rhos/version/5.0/build/staypuft_50_lkg.yml b/settings/product/rhos/version/5.0/build/staypuft_50_lkg.yml index d3f106168..b85b62d68 100644 --- a/settings/product/rhos/version/5.0/build/staypuft_50_lkg.yml +++ b/settings/product/rhos/version/5.0/build/staypuft_50_lkg.yml @@ -3,5 +3,3 @@ product: repo: puddle_pin_version: 'latest' poodle_pin_version: 'latest' - foreman_pin_version: 'latest' - foreman_poodle_pin_version: 'latest' diff --git a/settings/product/rhos/version/6.0/build/latest.yml 
b/settings/product/rhos/version/6.0/build/latest.yml index d3f106168..b85b62d68 100644 --- a/settings/product/rhos/version/6.0/build/latest.yml +++ b/settings/product/rhos/version/6.0/build/latest.yml @@ -3,5 +3,3 @@ product: repo: puddle_pin_version: 'latest' poodle_pin_version: 'latest' - foreman_pin_version: 'latest' - foreman_poodle_pin_version: 'latest' diff --git a/settings/product/rhos/version/7.0.yml b/settings/product/rhos/version/7.0.yml index 2b70d8e1a..6e9cb6c32 100644 --- a/settings/product/rhos/version/7.0.yml +++ b/settings/product/rhos/version/7.0.yml @@ -4,3 +4,6 @@ product: major: 7 minor: 0 code_name: kilo + +workarounds: + rhbz1299563: {} diff --git a/settings/product/rhos/version/7.0/build/latest.yml b/settings/product/rhos/version/7.0/build/latest.yml index d3f106168..b85b62d68 100644 --- a/settings/product/rhos/version/7.0/build/latest.yml +++ b/settings/product/rhos/version/7.0/build/latest.yml @@ -3,5 +3,3 @@ product: repo: puddle_pin_version: 'latest' poodle_pin_version: 'latest' - foreman_pin_version: 'latest' - foreman_poodle_pin_version: 'latest' diff --git a/settings/product/rhos/version/7_director/build/ga_71.yml b/settings/product/rhos/version/7_director/build/ga_71.yml index bc102751e..6ac12a2fa 100644 --- a/settings/product/rhos/version/7_director/build/ga_71.yml +++ b/settings/product/rhos/version/7_director/build/ga_71.yml @@ -3,10 +3,10 @@ product: build: ga build_version: ga-7.1 repo: - puddle_pin_version: 'GA' - poodle_pin_version: 'GA' + puddle_pin_version: 'Z2' + poodle_pin_version: 'Z2' core_product_version: 7 - puddle_director_pin_version: 'GA' + puddle_director_pin_version: 'Y1' installer: images: diff --git a/settings/product/rhos/version/7_director/build/ga_72.yml b/settings/product/rhos/version/7_director/build/ga_72.yml new file mode 100644 index 000000000..d5809464d --- /dev/null +++ b/settings/product/rhos/version/7_director/build/ga_72.yml @@ -0,0 +1,13 @@ +--- +product: + build: ga + build_version: ga-7.2 + repo: 
+ puddle_pin_version: 'Z2' + poodle_pin_version: 'Z2' + core_product_version: 7 + puddle_director_pin_version: 'Y3' + +installer: + images: + version: '7.2' diff --git a/settings/product/rhos/version/7_director/build/last_known_good.yml b/settings/product/rhos/version/7_director/build/last_known_good.yml index 7492edbe9..c5c522dd4 100644 --- a/settings/product/rhos/version/7_director/build/last_known_good.yml +++ b/settings/product/rhos/version/7_director/build/last_known_good.yml @@ -6,8 +6,6 @@ product: puddle_pin_version: 'latest' puddle_director_pin_version: '2015-10-16.1' poodle_pin_version: 'latest' - foreman_pin_version: 'latest' - foreman_poodle_pin_version: 'latest' installer: images: diff --git a/settings/product/rhos/version/7_director/build/latest.yml b/settings/product/rhos/version/7_director/build/latest.yml index 82e9bde1e..5bad17fdb 100644 --- a/settings/product/rhos/version/7_director/build/latest.yml +++ b/settings/product/rhos/version/7_director/build/latest.yml @@ -6,8 +6,6 @@ product: puddle_pin_version: 'latest' puddle_director_pin_version: 'latest' poodle_pin_version: 'latest' - foreman_pin_version: 'latest' - foreman_poodle_pin_version: 'latest' installer: images: diff --git a/settings/product/rhos/version/8.0/build/latest.yml b/settings/product/rhos/version/8.0/build/latest.yml index d3f106168..b85b62d68 100644 --- a/settings/product/rhos/version/8.0/build/latest.yml +++ b/settings/product/rhos/version/8.0/build/latest.yml @@ -3,5 +3,3 @@ product: repo: puddle_pin_version: 'latest' poodle_pin_version: 'latest' - foreman_pin_version: 'latest' - foreman_poodle_pin_version: 'latest' diff --git a/settings/product/rhos/version/8_director/build/last_known_good.yml b/settings/product/rhos/version/8_director/build/last_known_good.yml index 0be886089..759cab59d 100644 --- a/settings/product/rhos/version/8_director/build/last_known_good.yml +++ b/settings/product/rhos/version/8_director/build/last_known_good.yml @@ -6,5 +6,3 @@ product: 
puddle_pin_version: 'latest' puddle_director_pin_version: 'latest' poodle_pin_version: 'latest' - foreman_pin_version: 'latest' - foreman_poodle_pin_version: 'latest' diff --git a/settings/product/rhos/version/8_director/build/latest.yml b/settings/product/rhos/version/8_director/build/latest.yml index fd08f7249..ac120b648 100644 --- a/settings/product/rhos/version/8_director/build/latest.yml +++ b/settings/product/rhos/version/8_director/build/latest.yml @@ -6,8 +6,6 @@ product: puddle_pin_version: 'latest' puddle_director_pin_version: 'latest' poodle_pin_version: 'latest' - foreman_pin_version: 'latest' - foreman_poodle_pin_version: 'latest' installer: images: diff --git a/settings/provisioner/beaker/site/bkr.yml b/settings/provisioner/beaker/site/bkr.yml index d4d126d6c..3064f05ce 100644 --- a/settings/provisioner/beaker/site/bkr.yml +++ b/settings/provisioner/beaker/site/bkr.yml @@ -1,7 +1,9 @@ provisioner: beaker_checkout_script: 'khaleesi-settings/scripts/beaker/beakerCheckOut.sh' host_lab_controller: !env [BEAKER_HOST_CONTROLLER, lab-02.rhts.eng.brq.redhat.com] - whiteboard_message: 'InstackTesting' + whiteboard_prefix: 'InstackTesting' + whiteboard_triggering_job: !env BUILD_URL + whiteboard_message: '{{ !lookup provisioner.whiteboard_prefix }},triggered_from:{{ !lookup provisioner.whiteboard_triggering_job }}' network: public_subnet_cidr: 172.17.0.0/16 public_allocation_start: 172.17.0.200 diff --git a/settings/provisioner/manual/topology/single_node.yml b/settings/provisioner/manual/topology/single_node.yml new file mode 100644 index 000000000..02b3797ea --- /dev/null +++ b/settings/provisioner/manual/topology/single_node.yml @@ -0,0 +1,10 @@ +provisioner: + nodes: + host0: + name: host0 + remote_user: root + hostname: "{{ lookup('env', 'TEST_MACHINE') }}" + groups: + - provisioned + - controller + - tester diff --git a/settings/provisioner/openstack/site/qeos7/tenant/rhos-qe-ci.yml b/settings/provisioner/openstack/site/qeos7/tenant/rhos-qe-ci.yml index 
20624be12..5b04bc896 100644 --- a/settings/provisioner/openstack/site/qeos7/tenant/rhos-qe-ci.yml +++ b/settings/provisioner/openstack/site/qeos7/tenant/rhos-qe-ci.yml @@ -36,8 +36,15 @@ provisioner: allocation_pool_end: 172.31.1.100 flavor: - small: 2 - medium: 3 + # The list of flavors should be rechecked; right now, 3 is the smallest + # one with a disk big enough for the relevant images. 4 is a bit bigger, + # so use it for medium, and keep it for large, since it roughly matches + # the values of the old qeos. The other flavors should be cleaned up + # (they are mostly unused right now anyway). + # The recheck could lead to a bigger change to the flavor definitions + # (see CENTRALCI-1189) + small: 3 + medium: 4 large: 4 large_testing: c6e0ad85-81a8-4fbb-a2d9-b0abac52f79b large_ephemeral: a89c1587-aab2-49c2-a60d-4d19ea40bdbc diff --git a/settings/provisioner/openstack/topology/all-in-one-odl.yml b/settings/provisioner/openstack/topology/all-in-one-odl.yml new file mode 100644 index 000000000..3f56753ac --- /dev/null +++ b/settings/provisioner/openstack/topology/all-in-one-odl.yml @@ -0,0 +1,46 @@ +--- +provisioner: + nodes: + controller: &controller + name: '{{ tmp.node_prefix }}controller' + hostname: + rebuild: no + flavor_id: !lookup provisioner.flavor.large + image_id: !lookup provisioner.images[ !lookup distro.name ][ !lookup distro.full_version ].id + remote_user: !lookup provisioner.images[ !lookup distro.name ][ !lookup distro.full_version ].remote_user + network: &network_params + interfaces: &interfaces + data: &data_interface + label: eth1 + config_params: &data_interface_params + bootproto: static + ipaddr: 10.0.0.1 + netmask: 255.255.255.0 + nm_controlled: "no" + type: ethernet + onboot: yes + device: !lookup provisioner.nodes.controller.network.interfaces.data.label + external: &external_interface + label: eth2 + groups: + - controller + - network + - compute + - openstack_nodes + + odl_controller: + <<: *controller + name: '{{ tmp.node_prefix }}odl_controller' +
network: + <<: *network_params + interfaces: + <<: *interfaces + data: + <<: *data_interface + config_params: + <<: *data_interface_params + ipaddr: 10.0.0.2 + + groups: + - odl_controller + - openstack_nodes diff --git a/settings/tester/api.yml b/settings/tester/api.yml index 9d86689f8..89a0c785d 100644 --- a/settings/tester/api.yml +++ b/settings/tester/api.yml @@ -9,6 +9,7 @@ tester: - username: 'demo' tenant_name: 'demo' password: 'secrete' + roles: [] node: prefix: diff --git a/settings/tester/functional.yml b/settings/tester/functional.yml index d6585d47e..f6e66fc86 100644 --- a/settings/tester/functional.yml +++ b/settings/tester/functional.yml @@ -5,6 +5,8 @@ tester: short_name: func component: config_file: jenkins-config.yml + tox_target: dsvm-functional + node: prefix: - !lookup tester.short_name diff --git a/settings/tester/functional/component/neutron-fwaas.yml b/settings/tester/functional/component/neutron-fwaas.yml deleted file mode 100644 index 2295b1149..000000000 --- a/settings/tester/functional/component/neutron-fwaas.yml +++ /dev/null @@ -1,6 +0,0 @@ -tester: - component: - name: neutron-fwaas - short_name: ntrn-fw - dir: !join [ !env WORKSPACE, /neutron-fwaas] - tox_target: dsvm-functional diff --git a/settings/tester/functional/component/neutron-lbaas.yml b/settings/tester/functional/component/neutron-lbaas.yml deleted file mode 100644 index 2213ad459..000000000 --- a/settings/tester/functional/component/neutron-lbaas.yml +++ /dev/null @@ -1,6 +0,0 @@ -tester: - component: - name: neutron-lbaas - short_name: ntrn-lb - dir: !join [ !env WORKSPACE, /neutron-lbaas] - tox_target: dsvm-functional diff --git a/settings/tester/functional/component/neutron-vpnaas.yml b/settings/tester/functional/component/neutron-vpnaas.yml deleted file mode 100644 index 871e7f89a..000000000 --- a/settings/tester/functional/component/neutron-vpnaas.yml +++ /dev/null @@ -1,6 +0,0 @@ -tester: - component: - name: neutron-vpnaas - short_name: ntrn-vpn - dir: !join [ !env 
WORKSPACE, /neutron-vpnaas]
-        tox_target: dsvm-functional
diff --git a/settings/tester/functional/component/neutron.yml b/settings/tester/functional/component/neutron.yml
deleted file mode 100644
index a42dbf67a..000000000
--- a/settings/tester/functional/component/neutron.yml
+++ /dev/null
@@ -1,6 +0,0 @@
-tester:
-    component:
-        name: neutron
-        short_name: ntrn
-        dir: !join [ !env WORKSPACE, /neutron]
-        tox_target: dsvm-functional
diff --git a/settings/tester/functional/component/python-neutronclient.yml b/settings/tester/functional/component/python-neutronclient.yml
deleted file mode 100644
index eefb523c9..000000000
--- a/settings/tester/functional/component/python-neutronclient.yml
+++ /dev/null
@@ -1,6 +0,0 @@
-tester:
-    component:
-        name: python-neutronclient
-        short_name: py-ntrnclnt
-        dir: !join [ !env WORKSPACE, /python-neutronclient]
-        tox_target: functional
diff --git a/settings/tester/integration.yml b/settings/tester/integration.yml
index 0d8cdf41b..999e90960 100644
--- a/settings/tester/integration.yml
+++ b/settings/tester/integration.yml
@@ -1,7 +1,3 @@
 --- !extends:common.yml
 tester:
     type: integration
-    accounts:
-        - username: 'demo'
-          tenant_name: 'demo'
-          password: 'secrete'
diff --git a/settings/tester/integration/component/horizon.yml b/settings/tester/integration/component/horizon.yml
index b6be4db50..2c4bdd6da 100644
--- a/settings/tester/integration/component/horizon.yml
+++ b/settings/tester/integration/component/horizon.yml
@@ -41,7 +41,6 @@ tester:
         - python-virtualenv
         - firefox
         - unzip
-        - git
         - python-keystoneclient
         - xorg-x11-server-Xvfb
         - xorg-x11-font*
@@ -56,3 +55,8 @@ tester:
         - selenium==2.45.0
         - nose
         - testtools
+    accounts:
+        - username: 'demo'
+          tenant_name: 'demo'
+          password: 'redhat'
+          roles: []
diff --git a/settings/tester/tempest/tests/cinder_full.yml b/settings/tester/tempest/tests/cinder_full.yml
index 94d1079d2..e6b7e7f37 100644
--- a/settings/tester/tempest/tests/cinder_full.yml
+++ b/settings/tester/tempest/tests/cinder_full.yml
@@ -6,5 +6,9 @@ tester:
             tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps.test_multiple_security_groups,
             tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_connectivity_between_vms_on_different_networks,
             tempest.scenario.test_network_v6.TestGettingAddress.test_dhcp6_stateless_from_os,
-            tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern]
-
+            tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern,
+            tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV1Test.test_volume_backup_create_get_detailed_list_restore_delete, # 7,6,5
+            tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV2Test.test_volume_backup_create_get_detailed_list_restore_delete, # 7,6,5
+            tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario, # only 7
+            tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_cryptsetup, # only 6 and 5
+            tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_luks] # only 5
diff --git a/settings/tester/tempest/tests/neutron_opendaylight.yml b/settings/tester/tempest/tests/neutron_opendaylight.yml
new file mode 100644
index 000000000..20ae5e640
--- /dev/null
+++ b/settings/tester/tempest/tests/neutron_opendaylight.yml
@@ -0,0 +1,75 @@
+tester:
+    tempest:
+        test_regex: tempest\.api\.network\|tempest\.scenario\.*network
+        whitelist: []
+        blacklist: [tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps.test_in_tenant_traffic,
+            tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps.test_multiple_security_groups,
+            tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_connectivity_between_vms_on_different_networks,
+            tempest.scenario.test_network_v6.TestGettingAddress.test_dhcp6_stateless_from_os,
+            tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_dhcp6_stateless_from_os,
+            tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern,
+            tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_subnet_with_allocation_pools,
+            tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_subnet_with_gw_and_allocation_pools,
+            tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_slaac_subnet_with_ports,
+            tempest.api.network.test_allowed_address_pair.AllowedAddressPairIpV6TestJSON,
+            tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcp_stateful_fixedips,
+            tempest.api.network.test_ports.PortsAdminExtendedAttrsIpV6TestJSON.test_create_port_binding_ext_attr,
+            tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_subnet_with_default_gw,
+            tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcp_stateful_router,
+            tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_invalid_options,
+            tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_bulk_port,
+            tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_stateless_no_ra,
+            tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_two_subnets,
+            tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6,
+            tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_update_delete_port,
+            tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_update_port_with_second_ip,
+            tempest.api.network.test_ports.PortsTestJSON.test_update_port_with_security_group_and_extra_attributes,
+            tempest.api.network.test_networks.NetworksIpV6TestJSON.test_create_delete_subnet_with_allocation_pools,
+            tempest.api.network.test_extra_dhcp_options.ExtraDHCPOptionsIpV6TestJSON.test_create_list_port_with_extra_dhcp_options,
+            tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_delete_network_with_subnet,
+            tempest.api.network.test_networks.NetworksIpV6TestJSON.test_create_delete_subnet_with_default_gw,
+            tempest.api.network.test_ports.PortsAdminExtendedAttrsIpV6TestJSON.test_update_port_binding_ext_attr,
+            tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_subnet_with_host_routes_and_dns_nameservers,
+            tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_update_subnet_gw_dns_host_routes_dhcp,
+            tempest.api.network.test_networks.NetworksIpV6TestJSON.test_create_delete_subnet_with_host_routes_and_dns_nameservers,
+            tempest.api.network.test_networks.NetworksIpV6TestJSON.test_create_list_subnet_with_no_gw64_one_network,
+            tempest.api.network.test_routers_negative.RoutersNegativeIpV6Test,
+            tempest.api.network.test_ports.PortsTestJSON.test_create_port_with_no_securitygroups,
+            tempest.api.network.test_ports.PortsTestJSON.test_update_port_with_two_security_groups_and_extra_attributes,
+            tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps.test_port_security_disable_security_group,
+            tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_stateless_subnet_with_ports,
+            tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_subnet_all_attributes,
+            tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_list_subnet_with_no_gw64_one_network,
+            tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_64_subnet,
+            tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_subnet_with_v6_attributes_stateless,
+            tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_update_delete_network_subnet,
+            tempest.api.network.test_networks.NetworksIpV6TestJSON.test_create_delete_subnet_with_gw,
+            tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_stateless_eui64,
+            tempest.api.network.test_routers.RoutersIpV6Test,
+            tempest.api.network.test_networks.NetworksIpV6TestAttrs,
+            tempest.api.network.test_networks.NetworksIpV6TestJSON.test_create_delete_subnet_all_attributes,
+            tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_port_in_allowed_allocation_pools,
+            tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_port_with_no_securitygroups,
+            tempest.api.network.test_networks.NetworksIpV6TestJSON.test_delete_network_with_subnet,
+            tempest.api.network.test_networks.NetworksIpV6TestJSON.test_update_subnet_gw_dns_host_routes_dhcp,
+            tempest.api.network.test_networks.NetworksIpV6TestJSON,
+            tempest.api.network.test_ports.PortsIpV6TestJSON.test_update_port_with_security_group_and_extra_attributes,
+            tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_stateless_no_ra_no_dhcp,
+            tempest.api.network.test_networks.BulkNetworkOpsIpV6TestJSON.test_bulk_create_delete_network,
+            tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps.test_cross_tenant_traffic,
+            tempest.api.network.test_routers.RoutersIpV6Test.test_update_router_set_gateway_with_snat_explicit,
+            tempest.api.network.test_networks.BulkNetworkOpsIpV6TestJSON.test_bulk_create_delete_subnet,
+            tempest.api.network.test_networks.BulkNetworkOpsIpV6TestJSON.test_bulk_create_delete_port,
+            tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_subnet_with_v6_attributes_slaac,
+            tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps.test_port_update_new_security_group,
+            tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_dhcpv6_stateless,
+            tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_slaac,
+            tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_slaac_from_os,
+            tempest.scenario.test_network_v6.TestGettingAddress.test_multi_prefix_dhcpv6_stateless,
+            tempest.scenario.test_network_v6.TestGettingAddress.test_multi_prefix_slaac,
+            tempest.scenario.test_security_groups_cross_hosts.TestCrossHost,
+            tempest.api.network.test_ports.PortsIpV6TestJSON.test_port_list_filter_by_router_id,
+            tempest.api.network.test_ports.PortsIpV6TestJSON,
+            tempest.api.network.test_extra_dhcp_options.ExtraDHCPOptionsIpV6TestJSON,
+            tempest.api.network.test_ports.PortsAdminExtendedAttrsIpV6TestJSON.test_list_ports_binding_ext_attr,
+            tempest.scenario.test_network_v6.TestGettingAddress.test_slaac_from_os]