File ardana-extensions-nsx-9.0+git.1568830037.2eea267.obscpio of Package ardana-extensions-nsx

07070100000000000081A40000000000000000000000015D82725500000084000000000000000000000000000000000000003C00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/.gitreview[gerrit]
host=gerrit.suse.provo.cloud
port=29418
project=ardana/ardana-extensions-nsx.git
defaultbranch=master
defaultremote=ardana
07070100000001000081A40000000000000000000000015D8272550000279F000000000000000000000000000000000000003900000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/LICENSE
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

07070100000002000081A40000000000000000000000015D8272550000084A000000000000000000000000000000000000003B00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/README.md(c) Copyright 2018 SUSE LLC

THIRD-PARTY IMPORT TO INCORPORATE THE VMWARE-NSX DRIVER INTO NEUTRON

To incorporate NSX-V or NSX-T into neutron, perform the steps below to deploy
your cloud. There is no need to rebuild the neutron venv; it already includes
the vmware-nsx driver and its dependent Python packages.

1. $ mkdir -p ~/third-party/vmware
   $ cp -R /usr/share/ardana/ansible/vmware ~/third-party/vmware/ansible

2. $ cd ~/openstack/ardana/ansible
   $ ansible-playbook -i hosts/localhost third-party-import.yml

3. Modify the input model to add the vmware-nsxv or vmware-nsxt component and
   remove the neutron components not needed for NSX (a minimal sketch follows
   at the end of this step). Some of the items that need to change in
   control_plane.yml are:
   - insert vmware-nsxv or vmware-nsxt after neutron-server
   - remove the following neutron components:
       - neutron-dhcp-agent
       - neutron-openvswitch-agent
       - neutron-l2gateway-agent
       - neutron-l3-agent
       - neutron-vpn-agent
       - neutron-lbaas-agent
       - neutron-lbaasv2-agent
       - neutron-metadata-agent
       - neutron-ml2-plugin
       - neutron-ovsvapp-agent
       - neutron-sriov-nic-agent
   - For NSX-T, insert vmware-nsxt-node after nova-compute-kvm

   These changes can be found in the sample input models in either of these
   directories:
      /usr/share/ardana/input-model/2.0/examples/vmware/entry-scale-nsxv
      /usr/share/ardana/input-model/2.0/examples/vmware/entry-scale-nsxt
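
   For illustration only, here is a minimal sketch of how the relevant part
   of a cluster's service-components list in control_plane.yml might look
   after this change (the actual cluster layout must come from your own
   input model or the sample models listed above):

       service-components:
         - neutron-server
         - vmware-nsxv          # or vmware-nsxt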

4. In your input model, create the NSX config-data file
   (~/openstack/my_cloud/definition/data/nsx/nsx_config.yml) and the
   pass_through file (~/openstack/my_cloud/definition/data/pass_through.yml)
   with the information about the ESX servers, credentials, cluster info, etc.
   The corresponding files in the sample input model should serve as a
   template; a hedged sketch of the NSX config-data values follows.
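
   For orientation, a hedged sketch of the values the NSX config-data carries
   for NSX-T (key names taken from the nsxt.ini.j2 template in this package;
   the surrounding configuration-data structure should be copied from the
   sample input model, and every value below is a placeholder):

       nsx_api_managers: https://nsx-mgr.example.com
       nsx_api_user: admin
       nsx_api_password: <password>
       default_overlay_tz_uuid: <overlay-transport-zone-uuid>
       default_tier0_router_uuid: <tier0-router-uuid>
       insecure: False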

5. Use git commit to save the changes to the input model.

6. $ cd ~/openstack/ardana/ansible
   $ ansible-playbook -i hosts/localhost config-processor-run.yml
   $ ansible-playbook -i hosts/localhost ready-deployment.yml
   $ cd ~/scratch/ansible/next/ardana/ansible
   $ ansible-playbook -i hosts/verb_hosts site.yml
07070100000003000041ED0000000000000000000000055D82725500000000000000000000000000000000000000000000003800000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware07070100000004000041ED0000000000000000000000055D82725500000000000000000000000000000000000000000000004000000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible07070100000005000041ED0000000000000000000000025D82725500000000000000000000000000000000000000000000004700000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/config07070100000006000081A40000000000000000000000015D8272550000039F000000000000000000000000000000000000005800000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/config/nsx-symlinks.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

# The following relative symlinks are created under the
# ~/openstack/my_cloud/config/vmware-nsx/ directory. Users are permitted
# to make customizations to the config file templates defined there.
---
symlinks:
  "vmware-nsx/nsxv.ini.j2": "roles/vmware-nsx/templates/nsxv.ini.j2"
  "vmware-nsx/nsxt.ini.j2": "roles/vmware-nsx/templates/nsxt.ini.j2"
07070100000007000041ED0000000000000000000000035D82725500000000000000000000000000000000000000000000004800000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/hooks.d07070100000008000041ED0000000000000000000000025D82725500000000000000000000000000000000000000000000004F00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/hooks.d/vmware07070100000009000081A40000000000000000000000015D8272550000030E000000000000000000000000000000000000006700000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/hooks.d/vmware/post-clients-deploy.yml#
# (c) Copyright 2018 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
# This playbook is inserted by ready-deployment.yml into ardana-deploy.yml
# immediately after the line
#
#   - include: clients-deploy.yml

- include: "{{ playbook_dir }}/nsx-neutronclient-deploy.yml"
0707010000000A000081A40000000000000000000000015D8272550000030E000000000000000000000000000000000000006800000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/hooks.d/vmware/post-clients-upgrade.yml#
# (c) Copyright 2018 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
# This playbook is inserted by ready-deployment.yml into ardana-upgrade.yml
# immediately after the line
#   - include: clients-upgrade.yml

- include: "{{ playbook_dir }}/nsx-neutronclient-deploy.yml"
0707010000000B000081A40000000000000000000000015D8272550000030A000000000000000000000000000000000000006300000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/hooks.d/vmware/pre-nova-deploy.yml#
# (c) Copyright 2018 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
# This playbook is inserted by ready-deployment.yml into ardana-deploy.yml
# immediately before the nova deployment step, so that NSX-T transport nodes
# are configured before nova-compute is deployed on them.

- include: "{{ playbook_dir }}/nsxt-nodes-configure.yml"
0707010000000C000081A40000000000000000000000015D827255000002BD000000000000000000000000000000000000005D00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/nsx-neutronclient-deploy.yml#
# (c) Copyright 2018 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---

- hosts: NEU-CLI
  roles:
    - vmware-nsx
  tasks:
    - include: roles/vmware-nsx/tasks/nsx-neutronclient-install.yml
0707010000000D000081A40000000000000000000000015D82725500000353000000000000000000000000000000000000005900000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/nsxt-nodes-configure.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---

- hosts: VMW-NSXT-NODE
  roles:
    - network_interface
    - vmware-nsx
  tasks:
    - include: roles/vmware-nsx/tasks/nsxt-node-prerequisites.yml
    - include: roles/vmware-nsx/tasks/nsxt-gather-facts.yml
    - include: roles/vmware-nsx/tasks/nsxt-node-configure.yml
0707010000000E000041ED0000000000000000000000035D82725500000000000000000000000000000000000000000000004600000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles0707010000000F000041ED0000000000000000000000065D82725500000000000000000000000000000000000000000000005100000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx07070100000010000041ED0000000000000000000000025D82725500000000000000000000000000000000000000000000005A00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx/defaults07070100000011000081A40000000000000000000000015D82725500000312000000000000000000000000000000000000006300000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx/defaults/main.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# Default variable values for tasks using the vmware-nsx role.
---
required_neutronclient_packages: []

nsxt_node_required_packages: []

nsx_insecure: "{{ config_data | item('NSX.insecure', default='False') }}"
07070100000012000041ED0000000000000000000000025D82725500000000000000000000000000000000000000000000005700000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx/tasks07070100000013000081A40000000000000000000000015D827255000002B0000000000000000000000000000000000000006000000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx/tasks/main.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
- name: vmware-nsx | main | Set os-specific variables
  include_vars: "{{ ansible_os_family | lower }}.yml"
07070100000014000081A40000000000000000000000015D8272550000050F000000000000000000000000000000000000007500000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx/tasks/nsx-neutronclient-install.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---

- name: vmware-nsx | nsx-neutronclient-install | Debian - Install packages
  become: yes
  apt: name={{ item }} install_recommends=no state=latest force=yes
  with_items: "{{ required_neutronclient_packages | default([]) }}"
  when: ansible_os_family == 'Debian'

- name: vmware-nsx | nsx-neutronclient-install | RedHat - Install packages
  become: yes
  yum: name={{ item }} state=latest
  with_items: "{{ required_neutronclient_packages | default([]) }}"
  when: ansible_os_family == 'RedHat'

- name: vmware-nsx | nsx-neutronclient-install | SUSE - Install packages
  become: yes
  zypper: name={{ item }} state=latest
  with_items: "{{ required_neutronclient_packages | default([]) }}"
  when: ansible_os_family == 'Suse'
07070100000015000081A40000000000000000000000015D82725500000749000000000000000000000000000000000000006D00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx/tasks/nsxt-gather-facts.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---

- name: vmware-nsx | nsxt-gather-facts | Get transport node profile from manager
  delegate_to: localhost
  uri:
    url: "{{ config_data | item('NSX.nsx_api_managers')
          }}/api/v1/transport-node-profiles/{{
            host.pass_through.vmware_nsxt.transport_node_profile_id
          }}"
    method: GET
    user: "{{ config_data | item('NSX.nsx_api_user') }}"
    password: "{{ config_data | item('NSX.nsx_api_password') | openstack_user_password_decrypt }}"
    force_basic_auth: yes
    validate_certs: "{{ not nsx_insecure }}"
    return_content: yes
  register: nsxt_transport_node_profile

- name: vmware-nsx | nsxt-gather-facts | Get host thumbprint
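  # The pipeline below derives the node's SSH host key thumbprint: it takes
  # the base64-encoded key body from ssh_host_rsa_key.pub, decodes it,
  # computes the raw SHA-256 digest, and re-encodes that digest as base64
  # (the value later sent as "thumbprint" in the transport node request).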
  shell: >-
    awk '{print $2}' /etc/ssh/ssh_host_rsa_key.pub |
    base64 -d |
    sha256sum -b |
    sed 's/ .*$//' |
    xxd -r -p |
    base64
  register: nsxt_host_thumbprint

- name: vmware-nsx | nsxt-gather-facts | Set facts
  set_fact:
    nsxt_host_thumbprint: "{{ nsxt_host_thumbprint.stdout }}"
    nsxt_transport_node_profile: "{{ nsxt_transport_node_profile.json }}"
    nsxt_managed_interfaces: >-
      {{ nsxt_transport_node_profile.json.host_switch_spec.host_switches
          | sum(attribute='pnics', start=[])
          | map(attribute='device_name')
          | unique
          | list
      }}
07070100000016000081A40000000000000000000000015D8272550000180A000000000000000000000000000000000000006F00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx/tasks/nsxt-node-configure.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---

- name: vmware-nsx | nsxt-node-configure | Initialize nsxt managed interface configuration files
  copy:
    content: ""
    dest: "{{ net_path }}/{{ ifcfg_prefix }}-{{ item }}"
    force: no
    owner: root
    group: root
    mode: 0644
  become: yes
  with_items: "{{ nsxt_managed_interfaces }}"

- name: vmware-nsx | nsxt-node-configure | Build transport node request body
  set_fact:
    nsxt_transport_node: >-
      {
        "node_deployment_info": {
          "resource_type": "HostNode",
          "display_name": "{{ inventory_hostname }}",
          "ip_addresses": [
            "{{ host.bind.VMW_NSXT_NODE.ssh.ip_address }}"
          ],
          "os_type": "{{ nsxt_os_type }}",
          "host_credential": {
            "username": "{{
              host.pass_through.vmware_nsxt.username |
              default(ansible_ssh_user)
              }}",
            "password": "{{
              host.pass_through.vmware_nsxt.password |
              openstack_user_password_decrypt
              }}",
            "thumbprint": "{{ nsxt_host_thumbprint }}"
          }
        },
        "host_switch_spec" : {{ nsxt_transport_node_profile.host_switch_spec }},
        "transport_zone_endpoints": {{ nsxt_transport_node_profile.transport_zone_endpoints }}
      }

- name: vmware-nsx | nsxt-node-configure | Filter out NiocProfile
  set_fact:
    nsxt_transport_node_json: " {{
        nsxt_transport_node |
        to_json |
        regex_replace('{[^{]*NiocProfile[^}]*},?','')
      }}"

- name: vmware-nsx | nsxt-node-configure | Configure host as transport node
  delegate_to: localhost
  uri:
    url: "{{ config_data | item('NSX.nsx_api_managers') }}/api/v1/transport-nodes"
    method: POST
    HEADER_Content-Type: "application/json"
    body: "{{ nsxt_transport_node_json }}"
    user: "{{ config_data | item('NSX.nsx_api_user') }}"
    password: "{{ config_data | item('NSX.nsx_api_password') | openstack_user_password_decrypt }}"
    force_basic_auth: yes
    status_code: 201, 400
    validate_certs: "{{ not (config_data | item('NSX.insecure', default='False')) }}"
    return_content: yes
  register: nsxt_configure_result

- name: vmware-nsx | nsxt-node-configure | Check transport node configuration result
  fail:
    msg: "NSXT node configuration failed: {{ nsxt_configure_result.json.error_message }}"
  when:
    - nsxt_configure_result.status == 400
    # error_code 7014 given when transport node with same ip already exists.
    - nsxt_configure_result.json.error_code != 7014

- name: vmware-nsx | nsxt-node-configure | Get added transport node id
  set_fact:
    nsxt_transport_node_id: "{{ nsxt_configure_result.json.id }}"
  when:
    - nsxt_configure_result.status == 201

- name: vmware-nsx | nsxt-node-configure | Get existent transport node id
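  # A 400 response here means a transport node with the same IP address
  # already exists (error_code 7014, allowed through above); the NSX error
  # message embeds that node's UUID, so extract it with the regex below.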
  set_fact:
    nsxt_transport_node_id: >-
      {{ nsxt_configure_result.json.error_message
           | regex_replace('.*([a-fA-F0-9]{8}(-[a-fA-F0-9]{4}){3}-[a-fA-F0-9]{12}).*', '\\1')
      }}
  when:
    - nsxt_configure_result.status == 400

# TODO
# We have limited capability to verify that an already configured node has
# the desired configuration, so for now, other than verifying the transport
# zone information below, we assume it does. A way to solve this problem is to
# support node configuration update, for which a filter plugin equivalent to
# Ansible 2's 'combine' filter would be really helpful.
# - include: nsxt-node-update.yml
#   when:
#     - nsxt_configure_result.status == 400
#     - nsxt_configure_result.json.error_code == 7014


- name: vmware-nsx | nsxt-node-configure | Wait for transport node configuration
  delegate_to: localhost
  uri:
    url: "{{ config_data | item('NSX.nsx_api_managers')
          }}/api/v1/transport-nodes/{{
             nsxt_transport_node_id
          }}/state"
    method: GET
    user: "{{ config_data | item('NSX.nsx_api_user') }}"
    password: "{{ config_data | item('NSX.nsx_api_password') | openstack_user_password_decrypt }}"
    force_basic_auth: yes
    status_code: 200
    validate_certs: "{{ not nsx_insecure }}"
    return_content: yes
  register: nsxt_transport_node_state
  retries: 60
  delay: 10
  until:
    - nsxt_transport_node_state.json.state != "pending"
    - nsxt_transport_node_state.json.state != "in_progress"
  ignore_errors: yes

- name: vmware-nsx | nsxt-node-configure | Failed to get transport node configuration state
  fail:
    msg: "Failed to get transport node configuration state"
  when: nsxt_transport_node_state.status != 200

- name: vmware-nsx | nsxt-node-configure | Check transport node configuration state
  fail:
    msg: >-
      Transport node configuration incomplete after timeout, status:
        {{ nsxt_transport_node_state.json.state }}.
      Details: {{ nsxt_transport_node_state.json | to_nice_json }}
  when:
    - nsxt_transport_node_state.json.state != "success"

- name: vmware-nsx | nsxt-node-configure | Prepare node transport zone information
  set_fact:
    nsxt_transport_zone_config: >-
      {{ nsxt_transport_node.transport_zone_endpoints |
         map(attribute='transport_zone_id') |
         list }}
    nsxt_transport_zone_operational: >-
      {{ nsxt_transport_node_state.json.host_switch_states |
         sum(attribute='transport_zone_ids', start=[]) }}

- name: vmware-nsx | nsxt-node-configure | Check node transport zone operational status
  fail:
    msg: "Transport node is not in required transport zones: {{ nsxt_transport_zone_config }}"
  when: nsxt_transport_zone_config | difference(nsxt_transport_zone_operational) | length > 0
07070100000017000081A40000000000000000000000015D827255000003C7000000000000000000000000000000000000007300000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx/tasks/nsxt-node-prerequisites.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---

- name: vmware-nsx | nsxt-node-prerequisites | Check if node is supported
  fail:
    msg: "Ardana NSX-T extension is not supported for {{ ansible_distribution }}"
  when: nsxt_os_type is undefined

- name: vmware-nsx | nsxt-node-prerequisites | Install node required packages
  become: yes
  package:
    name: "{{ item }}"
    state: present
  with_items: "{{ nsxt_node_required_packages }}"
07070100000018000041ED0000000000000000000000035D82725500000000000000000000000000000000000000000000005B00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx/templates07070100000019000081A40000000000000000000000015D82725500002403000000000000000000000000000000000000006700000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx/templates/nsxt.ini.j2#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

[nsx_v3]

# The IP address of one or more NSX Managers separated by commas. The IP address
# should be in the following form: [<scheme>://]<ip_address>[:<port>]. If scheme
# is not provided https is used. If a port is not provided, port 80 is used for
# http and port 443 for https.
nsx_api_managers = "{{ config_data | item('NSX.nsx_api_managers') }}"

# The username used to access the NSX Manager API.
nsx_api_user = "{{ config_data | item('NSX.nsx_api_user') }}"

# The password used to access the NSX Manager API.
nsx_api_password = "{{ config_data | item('NSX.nsx_api_password') | openstack_user_password_decrypt }}"

# The UUID or name of the default NSX overlay transport zone that is used for
# creating tunneled or isolated Neutron networks. If no physical network is
# specified when creating a logical network, this transport zone will be used by
# default.
default_overlay_tz = "{{ config_data | item('NSX.default_overlay_tz_uuid') }}"

# The UUID or name of the default tier0 router that is used for connecting to
# tier1 logical routers and configuring external networks.
default_tier0_router = "{{ config_data | item('NSX.default_tier0_router_uuid') }}"

{% set native_dhcp_metadata = config_data | item('NSX.native_dhcp_metadata', default='True') %}
#  If true, DHCP and metadata proxy services will be provided by NSX.
native_dhcp_metadata = "{{ native_dhcp_metadata }}"


{%- set metadata_mode = config_data | item('NSX.metadata_mode') %}
{% if metadata_mode is defined %}

# Acceptable values are:
# - access_network: enables a dedicated connection to the metadata proxy for
#   metadata server access via Neutron router.
# - dhcp_host_route: enables host route injection via the dhcp agent. This
#   option is only useful if running on a host that does not support namespaces
#   otherwise access_network should be used.
metadata_mode = "{{ metadata_mode }}"
{% endif %}


{%- set metadata_on_demand = config_data | item('NSX.metadata_on_demand') %}
{% if metadata_on_demand is defined %}

#  If True, an internal metadata network is created for a router only when the
# router is attached to a DHCP-disabled subnet.
metadata_on_demand = "{{ metadata_on_demand }}"
{% endif %}


{%- set metadata_proxy = config_data | item('NSX.metadata_proxy_uuid') %}
{% if metadata_proxy is defined %}

# The UUID of the NSX Metadata Proxy that is used to enable native metadata
# service. It needs to be created in NSX before starting Neutron with the NSX
# plugin.
metadata_proxy = "{{ metadata_proxy }}"
{% endif %}


{%- set dhcp_profile = config_data | item('NSX.dhcp_profile_uuid') %}
{% if dhcp_profile is defined %}

# The UUID of the NSX DHCP Profile that is used to enable native DHCP service.
# It needs to be created in NSX before starting Neutron with the NSX plugin.
dhcp_profile = "{{ dhcp_profile }}"
{% endif %}


{%- set dhcp_lease_time = config_data | item('NSX.dhcp_lease_time') %}
{% if dhcp_lease_time is defined %}

# The DHCP default lease time for DHCP servers in NSX-T. If undefined, vmware-
# nsx sets it to 86400.
dhcp_lease_time = {{ dhcp_lease_time }}
{% endif %}


{%- set dhcp_relay_service = config_data | item('NSX.dhcp_relay_service') %}
{% if dhcp_relay_service is defined %}

# The name or UUID of the NSX DHCP relay service that will be used to enable
# DHCP relay on router ports.
dhcp_relay_service = {{ dhcp_relay_service }}
{% endif %}


{%- set number_of_nested_groups = config_data | item('NSX.number_of_nested_groups') %}
{% if number_of_nested_groups is defined %}

# The number of nested groups which are used by the plugin. Each Neutron
# security-groups is added to one nested group, and each nested group can
# contain a maximum of 500 security-groups, therefore, the maximum number of
# security groups that can be created is 500 * number_of_nested_groups. The
# default is 8 nested groups, which allows a maximum of 4k security-groups. To
# allow the creation of more security-groups, modify this figure.
number_of_nested_groups = "{{ number_of_nested_groups }}"
{% endif %}


{%- set dns_domain = config_data | item('NSX.dns_domain') %}
{% if dns_domain is defined %}

# Domain to use for building the hostnames.
dns_domain = "{{ dns_domain }}"
{% endif %}


{%- set default_vlan_tz = config_data | item('NSX.default_vlan_tz_uuid') %}
{% if default_vlan_tz is defined %}

# Only required when creating VLAN or flat provider networks. The UUID or name
# of the default NSX VLAN transport zone that is used for bridging between
# Neutron networks if no physical network has been specified.
default_vlan_tz = "{{ default_vlan_tz }}"
{% endif %}


{%- set default_edge_cluster = config_data | item('NSX.default_edge_cluster_uuid') %}
{% if default_edge_cluster is defined %}

# Default Edge Cluster UUID or name.
default_edge_cluster = "{{ default_edge_cluster }}"
{% endif %}


{%- set retries = config_data | item('NSX.retries') %}
{% if retries is defined %}

# The maximum number of times to retry API requests upon stale revision errors.
retries = "{{ retries }}"
{% endif %}


{%- set ca_file = config_data | item('NSX.ca_file') %}
{% if ca_file is defined %}

# Specify a CA bundle file to use in verifying the NSX Manager server
# certificate. This option is ignored if "insecure" is set to True. If
# "insecure" is set to False and ca_file is unset, the system root CAs will be
# used to verify the server certificate.
ca_file = "{{ ca_file }}"
{% endif %}


{%- set insecure = config_data | item('NSX.insecure') %}
{% if insecure is defined %}

# If true, the NSX Manager server certificate is not verified. If false the CA
# bundle specified via "ca_file" will be used or if unset the default system
# root CAs will be used.
insecure = "{{ insecure }}"
{% endif %}


{%- set http_timeout = config_data | item('NSX.http_timeout') %}
{% if http_timeout is defined %}

# The time in seconds before aborting an HTTP connection to an NSX Manager.
http_timeout = "{{ http_timeout }}"
{% endif %}


{%- set http_read_timeout = config_data | item('NSX.http_read_timeout') %}
{% if http_read_timeout is defined %}

# The time in seconds before aborting an HTTP read response from an NSX Manager.
http_read_timeout = "{{ http_read_timeout }}"
{% endif %}


{%- set http_retries = config_data | item('NSX.http_retries') %}
{% if http_retries is defined %}

# Maximum number of times to retry an HTTP connection.
http_retries = "{{ http_retries }}"
{% endif %}


{%- set concurrent_connections = config_data | item('NSX.concurrent_connections') %}
{% if concurrent_connections is defined %}

# Maximum number of concurrent connections to each NSX Manager.
concurrent_connections = "{{ concurrent_connections }}"
{% endif %}


{%- set conn_idle_timeout = config_data | item('NSX.conn_idle_timeout') %}
{% if conn_idle_timeout is defined %}

# The amount of time in seconds to wait before ensuring connectivity to the NSX
# manager if no Manager connection has been used.
conn_idle_timeout = "{{ conn_idle_timeout }}"
{% endif %}


{%- set default_bridge_cluster = config_data | item('NSX.default_bridge_cluster_uuid') %}
{% if default_bridge_cluster is defined %}

# The UUID or name of the default NSX bridge cluster that is used to perform L2
# gateway bridging between VXLAN and VLAN networks. If the default bridge
# cluster UUID is not specified, the administrator has to manually create a L2
# gateway corresponding to an NSX Bridge Cluster using L2 gateway APIs. This
# field must be specified on one of the active Neutron servers only.
default_bridge_cluster = "{{ default_bridge_cluster }}"
{% endif %}


[NSX]


{%- set qos_peak_bw_multiplier = config_data | item('NSX.qos_peak_bw_multiplier') %}
{% if qos_peak_bw_multiplier is defined %}

# The QoS rules peak bandwidth value will be the configured maximum
# bandwidth of the QoS rule, multiplied by this value. Value must be
# bigger than 1. Default is 2.
qos_peak_bw_multiplier = "{{ qos_peak_bw_multiplier }}"
{% endif %}


[DEFAULT]


{%- if native_dhcp_metadata | bool %}

# DHCP agent notification needs to be turned off if native DHCP is used.
dhcp_agent_notification = "False"
{% endif %}


{%- set locking_coordinator_url = config_data | item('NSX.locking_coordinator_url') %}
{% if locking_coordinator_url is defined %}

# URL for distributed locking coordination resource for lock manager.
locking_coordinator_url = "{{ locking_coordinator_url }}"
{% endif %}

{%- set ed_list = VMW_NSXT | get_provided_data_values('nsx_extension_drivers', default=[]) -%}
{%- if ed_list|length > 0 %}

# NSX-T specific extension drivers
nsx_extension_drivers = {{ ed_list | unique | join(',') }}
{%- endif -%}
0707010000001A000081A40000000000000000000000015D82725500000F85000000000000000000000000000000000000006700000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx/templates/nsxv.ini.j2#
# (c) Copyright 2017-2018 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

[nsxv]
manager_uri = "{{ config_data | item('NSX.manager_uri', default='') }}"
user = "{{ config_data | item('NSX.user', default='') }}"
password = "{{ config_data | item('NSX.password', default='') | openstack_user_password_decrypt }}"

datacenter_moid = "{{ config_data | item('NSX.datacenter_moid', default='') }}"
cluster_moid = "{{ config_data | item('NSX.cluster_moid', default='') }}"
resource_pool_id = "{{ config_data | item('NSX.resource_pool_id', default='') }}"

{%  set ds_id = config_data | item('NSX.datastore_id', default='') -%}
{%- set datastore_id_tokens = ['datastore_id', '=', "\"" + ds_id + "\""] -%}
{%- if ds_id | length > 0 -%}
   {{ datastore_id_tokens | join(' ') }}
{%- endif %}

vdn_scope_id = "{{ config_data | item('NSX.vdn_scope_id', default='') }}"

{%  set dvs_id = config_data | item('NSX.dvs_id', default='') -%}
{%- set dvs_id_tokens = ['dvs_id', '=', "\"" + dvs_id + "\""] -%}
{%- if dvs_id | length > 0 -%}
    {{ dvs_id_tokens | join(' ') }}
{%- endif %}

backup_edge_pool = {{ config_data | item('NSX.backup_edge_pool', default='service:compact:4:10,vdr:compact:4:10') }}

{%  set en_id = config_data | item('NSX.external_network', default='') -%}
{%- set en_id_tokens = ['external_network', '=', "\"" + en_id + "\""] -%}
{%- if en_id | length > 0 -%}
    {{ en_id_tokens | join(' ') }}
{%- endif %}

{% if VMW_NSXV is defined and VMW_NSXV.consumes_NOV_MTD is defined and NOV_MTD is defined %}
nova_metadata_port = {{ VMW_NSXV | item('consumes_NOV_MTD.vips.private.0.port') }}
nova_metadata_ips = {{ VMW_NSXV | item('consumes_NOV_MTD.vips.private.0.ip_address') }}
metadata_shared_secret = "{{ NOV_MTD | item('vars.metadata_proxy_shared_secret') }}"
{% else %}
nova_metadata_port = 8775
nova_metadata_ips = {{ NEU_SVR | item('consumes_NOV_API.vips.private.0.host') if NEU_SVR is defined }}
metadata_shared_secret = ""
{% endif %}

{%  set mnpn = config_data | item('NSX.mgt_net_proxy_netmask', default='') -%}
{%- set mnpn_tokens = ['mgt_net_proxy_netmask', '=', mnpn] -%}
{%- if mnpn | length > 0 -%}
    {{ mnpn_tokens | join(' ') }}
{%- endif %}

{%  set mnpi = config_data | item('NSX.mgt_net_proxy_ips', default='') -%}
{%- set mnpi_tokens = ['mgt_net_proxy_ips', '=', "\"" + mnpi + "\""] -%}
{%- if mnpi | length > 0 -%}
    {{ mnpi_tokens | join(' ') }}
{%- endif %}

{%  set mnm = config_data | item('NSX.mgt_net_moid', default='') -%}
{%- set mnm_tokens = ['mgt_net_moid', '=', "\"" + mnm + "\""] -%}
{%- if mnm | length > 0 -%}
    {{ mnm_tokens | join(' ') }}
{%- endif %}


{% for item in NEU_SVR.consumes_FND_MDB.members.mysql_gcomms %}
    {%- if loop.index is even and item.ardana_ansible_host != host.my_ardana_ansible_name -%}
        metadata_initializer = false
    {%- endif %}
{% endfor %}


{%  set ca_file_tokens = ['ca_file', '='] -%}
{%- do ca_file_tokens.append(config_data | item('NSX.ca_file', default='')) -%}
{%- if ca_file_tokens[2] | length > 0 -%}
    {{ ca_file_tokens | join(' ') }}
{%- endif %}

insecure = {{ config_data | item('NSX.insecure', default='True') }}

edge_ha = {{ config_data | item('NSX.edge_ha', default='False') }}
spoofguard_enabled = {{ config_data | item('NSX.spoofguard_enabled', default='True') }}
exclusive_router_appliance_size = {{ config_data | item('NSX.exclusive_router_appliance_size', default='compact') | lower }}

# Add customizations here.


# Do not add anything after this line
0707010000001B000041ED0000000000000000000000025D82725500000000000000000000000000000000000000000000006400000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx/templates/policy.d0707010000001C000081A40000000000000000000000015D827255000009AA000000000000000000000000000000000000007A00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx/templates/policy.d/neutron-fwaas.json.j2{#
#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
#}
{
    "shared_firewalls": "field:firewalls:shared=True",
    "shared_firewall_policies": "field:firewall_policies:shared=True",
    "shared_firewall_rules": "field:firewall_rules:shared=True",

    "create_firewall": "",
    "update_firewall": "rule:admin_or_owner",
    "delete_firewall": "rule:admin_or_owner",

    "create_firewall:shared": "rule:admin_only",
    "update_firewall:shared": "rule:admin_only",
    "delete_firewall:shared": "rule:admin_only",

    "get_firewall": "rule:admin_or_owner or rule:shared_firewalls",

    "shared_firewall_groups": "field:firewall_groups:shared=True",
    "shared_firewall_policies": "field:firewall_policies:shared=True",
    "shared_firewall_rules": "field:firewall_rules:shared=True",

    "create_firewall_group": "",
    "update_firewall_group": "rule:admin_or_owner",
    "delete_firewall_group": "rule:admin_or_owner",

    "create_firewall_group:shared": "rule:admin_only",
    "update_firewall_group:shared": "rule:admin_only",
    "delete_firewall_group:shared": "rule:admin_only",

    "get_firewall_group": "rule:admin_or_owner or rule:shared_firewall_groups",


    "create_firewall_policy": "",
    "update_firewall_policy": "rule:admin_or_owner",
    "delete_firewall_policy": "rule:admin_or_owner",

    "create_firewall_policy:shared": "rule:admin_only",
    "update_firewall_policy:shared": "rule:admin_only",
    "delete_firewall_policy:shared": "rule:admin_only",

    "get_firewall_policy": "rule:admin_or_owner or rule:shared_firewall_policies",

    "create_firewall_rule": "",
    "update_firewall_rule": "rule:admin_or_owner",
    "delete_firewall_rule": "rule:admin_or_owner",

    "create_firewall_rule:shared": "rule:admin_only",
    "update_firewall_rule:shared": "rule:admin_only",
    "delete_firewall_rule:shared": "rule:admin_only",

    "get_firewall_rule": "rule:admin_or_owner or rule:shared_firewall_rules"
}
0707010000001D000081A40000000000000000000000015D827255000004F3000000000000000000000000000000000000007400000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx/templates/policy.d/routers.json.j2{#
#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
#}
{
    "create_router:distributed": "rule:admin_or_owner",
    "get_router:distributed": "rule:admin_or_owner",
    "update_router:distributed": "rule:admin_or_owner",

    "get_router:ha": "rule:admin_only",
    "create_router": "rule:regular_user",
    "create_router:external_gateway_info:enable_snat": "rule:admin_or_owner",
    "create_router:ha": "rule:admin_only",
    "get_router": "rule:admin_or_owner",
    "update_router:external_gateway_info:enable_snat": "rule:admin_or_owner",
    "update_router:ha": "rule:admin_only",
    "delete_router": "rule:admin_or_owner",

    "add_router_interface": "rule:admin_or_owner",
    "remove_router_interface": "rule:admin_or_owner",
}
0707010000001E000081A40000000000000000000000015D82725500000396000000000000000000000000000000000000007C00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx/templates/policy.d/security-groups.json.j2{#
#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
#}
{
    "create_security_group:logging": "rule:admin_only",
    "update_security_group:logging": "rule:admin_only",
    "get_security_group:logging": "rule:admin_only",
    "create_security_group:provider": "rule:admin_only",
    "create_security_group:policy": "rule:admin_only",
    "update_security_group:policy": "rule:admin_only",
}
0707010000001F000041ED0000000000000000000000025D82725500000000000000000000000000000000000000000000005600000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx/vars07070100000020000081A40000000000000000000000015D827255000002CD000000000000000000000000000000000000006100000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx/vars/debian.yml#
# (c) Copyright 2018 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
---

# Contains packages needed by the vmware-nsx role, specific to Debian systems
required_neutronclient_packages:
  - python-vmware-nsx
07070100000021000081A40000000000000000000000015D8272550000048C000000000000000000000000000000000000006100000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx/vars/redhat.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
---
# Contains packages needed by the vmware-nsx role, specific to Red Hat systems
required_neutronclient_packages:
  - python-vmware-nsx

nsxt_os_type: "{{ (ansible_distribution | lower == 'centos') | ternary('CENTOSKVM', 'RHELKVM') }}"

nsxt_node_required_packages:
  - PyYAML
  - c-ares
  - gperftools-libs
  - initscripts
  - libev
  - libunwind
  - libvirt-libs
  - libyaml
  - python-beaker
  - python-gevent
  - python-greenlet
  - python-mako
  - python-markupsafe
  - python-netaddr
  - python-paste
  - python-tempita
  - redhat-lsb-core
  - wget

ifcfg_prefix: "{{ rhel_prefix }}"
07070100000022000081A40000000000000000000000015D827255000003C8000000000000000000000000000000000000005F00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/ansible/roles/vmware-nsx/vars/suse.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
---
# Packages listed here provide the NSX neutronclient command extensions
required_neutronclient_packages:
  - python-vmware-nsx

nsxt_os_type: SLESKVM

nsxt_node_required_packages:
  - libcap-progs
  - libvirt-libs
  - libunwind
  - lsb-release
  - lsof
  - net-tools
  - python-netaddr
  - python-PyYAML
  - python-simplejson
  - wget
  - tcpdump

ifcfg_prefix: "{{ suse_prefix }}"
07070100000023000041ED0000000000000000000000035D82725500000000000000000000000000000000000000000000004100000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples07070100000024000041ED0000000000000000000000045D82725500000000000000000000000000000000000000000000004800000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models07070100000025000041ED0000000000000000000000035D82725500000000000000000000000000000000000000000000005900000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt07070100000026000081A40000000000000000000000015D82725500001788000000000000000000000000000000000000006300000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/README.md
(c) Copyright 2019 SUSE LLC

Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.

## Ardana Single region Entry Scale Cloud with ESX + NSX-T Example ##

The input files in this example deploy a cloud with both ESX and KVM hypervisors
that uses NSX-T networking and has the following characteristics:

### Compute Proxy Nodes ###

- A single server that runs the Nova ESX compute proxy. There should be one
  node per ESX resource pool. The proxy nodes can be ESX virtual machines.
  When running as VMs, they should be in an HA cluster. Do not image the VMs
  serving as compute proxy nodes. Use the nodelist option of the bm-reimage.yml
  playbook to avoid imaging them.

- It is assumed that vCenter and NSX-T are properly configured so that
  networking for the ESX compute nodes is managed automatically, without any
  specific configuration action needed from Ardana.

### NSX-T KVM transport node ###

- One KVM compute server that will be configured as a transport node through
  the NSX-T manager. This is indicated via the optional ***vmware-nsxt-node***
  service component. If the component is not specified, the transport node is
  assumed to be already configured. If it is specified but the node was already
  configured before deployment, it is verified that the transport node is in
  the expected transport zone.

### Control Planes ###

- A single control plane consisting of three servers that co-host all of the
  other required OpenStack services.

### Deployer Node ###

This configuration runs the lifecycle-manager (formerly referred to as the
deployer) on a control plane node.  You need to include this node address
in your servers.yml definition. This function does not need a dedicated
network.

The minimum server count for this example is therefore 4 servers (Control
Plane (x3) for OpenStack services + 1 activated vCenter cluster having at
least 1 host, for vCenter appliance, NSX Manager, and ESX compute proxy
VMs).

An example set of servers is defined in ***data/servers.yml***. You will
need to modify this file to reflect your specific environment.

### Networking ###

The example requires the following networks:

IPMI/iLO network, connected to the deployer and the IPMI/iLO ports of all
servers

A pair of bonded NICs which are used by the following networks:

- EXTERNAL-API - This is the network that users will use to make requests to
                 the cloud
- INTERNAL-API - This is the network that will be used to access the
                 ESX metadata proxy servers
- MANAGEMENT - This is the network that will be used for all internal traffic
               between the cloud services and traffic between VMs on private
               networks within the cloud

Additionally, KVM compute nodes require one or more NICs, specified here for
completeness as the TRUNK network, which NSX-T will use for overlay networks.

The Data Center Management network, which hosts the vCenter server and the
NSX-T manager, must be reachable from the Cloud Management network so that the
controllers and compute proxy nodes can communicate with them.

An example set of networks is defined in ***data/networks.yml***. You will
need to modify this file to reflect your environment.

The example uses the devices hed3 & hed4 as a bonded network for all services.
If you need to modify these for your environment they are defined in
***data/net_interfaces.yml***. The network devices eth3 & eth4 are renamed to
devices hed3 & hed4 using the PCI bus mappings specified in
***data/nic_mappings.yml***. You may need to modify the PCI bus addresses to
match your system.

### Adapting the entry-scale model to fit your environment ###

The minimum set of changes you need to make to adapt the model for your
environment are:

- Update servers.yml to list the details of your bare metal servers (i.e.,
  iLO access info). You need to perform this step if you are using the
  Ardana-supplied Cobbler playbooks to install Linux on your servers.

- Update the networks.yml file to replace network CIDRs and VLANs with site
  specific values

- Update the nic_mappings.yml file to ensure that network devices are mapped
  to the correct physical port(s)

- Review the disk models (disks_*.yml) and confirm that the associated servers
  have the number of disks required by the disk model. The device names in the
  disk models might need to be adjusted to match the probe order of your servers.

Disk models are provided as follows:
    - DISK SET CONTROLLER: Minimum 1 disk
    - DISK SET COMPUTE NODE DISKS: These are the disks used on the ESX compute
      proxy nodes. Each node is an ESX VM; one virtual disk is expected to be
      created for each VM.

- Update the net_interfaces.yml file to match the server NICs used in your
  configuration. This file has a separate interface model definition for
  each of the following:
  - INTERFACE SET CONTROLLER
  - INTERFACE SET ESX-COMPUTE

*It is not recommended that users modify the DISK_SET used by the Nova compute proxy.*

## The NSX Configuration Data ##

The NSX Configuration data file data/nsx/nsx_config.yml contains the
information on the NSX installation needed to configure neutron to use the
NSX-T core-plugin. See the comments for the parameter descriptions.
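
As an illustration, here is an abbreviated sketch of the required settings,
mirroring the commented template in ***data/nsx/nsx_config.yml*** (all values
are placeholders that must be replaced with values from your NSX-T
installation):

```yaml
configuration-data:
  - name: NSXT-CONFIG-CP1
    services:
      - nsx
    data:
      # 'nsxt' selects the NSX-T core-plugin
      nsx_flavor: 'nsxt'
      # One or more NSX Manager URLs, separated by commas
      nsx_api_managers: 'https://<nsx-mgr-ip>:<port>'
      nsx_api_user: 'admin'
      # Encrypted with ~/openstack/ardana/ansible/ardanaencrypt.py
      nsx_api_password: "<encrypted-nsx-mgr-passwd-from-ardanaencrypt>"
      # UUID of the default NSX overlay transport zone
      default_overlay_tz_uuid: '<a-uuid>'
```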

## The pass_through.yml File ##

The ESX compute proxy needs to have the information in pass_through.yml in
order to configure itself. Additionally, the KVM compute nodes that include
the component ***vmware-nsxt-node*** need to have information specified in
order to configure them as transport nodes in NSX-T Manager. See the comments
for the parameter descriptions.
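
For illustration, an abbreviated per-server sketch for a KVM transport node,
mirroring the commented entries in ***data/pass_through.yml*** (all values are
placeholders):

```yaml
pass-through:
  servers:
    -
      # 'id' must match the corresponding 'servers.id' in servers.yml
      id: compute1
      data:
        vmware_nsxt:
          # Credentials NSX-T Manager uses to access the host and to
          # install and configure the NSX-T host packages
          username: <host username>
          password: <host encrypted-passwd-from-ardanaencrypt>
          # The transport node is configured with the host switch spec and
          # transport zone endpoints copied from this profile
          transport_node_profile_id: <transport node profile uuid>
```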
07070100000027000081A40000000000000000000000015D82725500000995000000000000000000000000000000000000006900000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/cloudConfig.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  cloud:
    name: entry-scale-nsxt

    # The following values are used when
    # building hostnames
    hostname-data:
        host-prefix: ardana
        member-prefix: -m

    # List of ntp servers for your site
    ntp-servers:
    #    - "ntp-server1"
    #    - "ntp-server2"

    # dns resolving configuration for your site
    # refer to resolv.conf for details on each option
    dns-settings:
    #  nameservers:
    #    - name-server1
    #    - name-server2
    #    - name-server3
    #
    #  domain: sub1.example.net
    #
    #  search:
    #    - sub1.example.net
    #    - sub2.example.net
    #
    #  sortlist:
    #    - 192.168.160.0/255.255.240.0
    #    - 192.168.0.0
    #
    #  # option flags are '<name>:' to enable, remove to unset
    #  # options with values are '<name>:<value>' to set
    #
    #  options:
    #    debug:
    #    ndots: 2
    #    timeout: 30
    #    attempts: 5
    #    rotate:
    #    no-check-names:
    #    inet6:
    smtp-settings:
    #  server: mailserver.examplecloud.com
    #  port: 25
    #  timeout: 15
    # These are only needed if your server requires authentication
    #  user:
    #  password:

    # Generate firewall rules
    firewall-settings:
        enable: true
        # log dropped packets
        logging: true

    # Disc space needs to be allocated to the audit directory before enabling
    # auditing.
    # Default can be either "disabled" or "enabled". Services listed in
    # "enabled-services" and "disabled-services" override the default setting.
    audit-settings:
       audit-dir: /var/audit
       default: disabled
       #enabled-services:
       #  - keystone
       #  - barbican
       disabled-services:
         - nova
         - barbican
         - keystone
         - cinder
         - ceilometer
         - neutron
         - swift
07070100000028000041ED0000000000000000000000055D82725500000000000000000000000000000000000000000000005E00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/data07070100000029000081A40000000000000000000000015D82725500001036000000000000000000000000000000000000007000000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/data/control_plane.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  control-planes:
    - name: control-plane-1
      control-plane-prefix: cp1
      region-name: region1
      failure-zones:
        - AZ1
        - AZ2
        - AZ3
      configuration-data:
        - DESIGNATE-CONFIG-CP1
        - NSXT-CONFIG-CP1
        - SWIFT-CONFIG-CP1
      common-service-components:
        - logging-rotate
        - logging-producer
        - monasca-agent
        - stunnel
        - lifecycle-manager-target
      clusters:
        - name: cluster1
          cluster-prefix: c1
          server-role: CONTROLLER-ROLE
          member-count: 3
          allocation-policy: strict
          service-components:
            - lifecycle-manager
            - tempest
            - ntp-server
            - swift-ring-builder
            - mysql
            - ip-cluster
            - apache2
            - keystone-api
            - keystone-client
            - rabbitmq
            - glance-api
            - glance-registry
            - glance-client
            - cinder-api
            - cinder-scheduler
            - cinder-volume
            - cinder-backup
            - cinder-client
            - nova-api
            - nova-placement-api
            - nova-scheduler
            - nova-conductor
            - nova-console-auth
            - nova-novncproxy
            - nova-client
            - neutron-server
            - vmware-nsxt
            - neutron-client
            - horizon
            - swift-proxy
            - memcached
            - swift-account
            - swift-container
            - swift-object
            - swift-client
            - heat-api
            - heat-api-cfn
            - heat-engine
            - heat-client
            - openstack-client
            - ceilometer-api
            - ceilometer-polling
            - ceilometer-agent-notification
            - ceilometer-common
            - ceilometer-client
            - zookeeper
            - kafka
            - spark
            - cassandra
            - storm
            - monasca-api
            - monasca-persister
            - monasca-notifier
            - monasca-threshold
            - monasca-client
            - monasca-transform
            - logging-server
            - ops-console-web
            - barbican-api
            - barbican-client
            - barbican-worker
            - designate-api
            - designate-central
            - designate-producer
            - designate-worker
            - designate-mdns
            - designate-client
            - bind
            - magnum-api
            - magnum-conductor
            # If NSXT native metadata is enabled, set here the same secret as
            # in the NSXT metadata proxy configuration. Or do not specify a
            # secret here and update the NSXT metadata proxy configuration
            # with the secret generated by the configuration processor.
            #- nova-metadata:
            #    metadata_proxy_shared_secret: secret

      resources:
        - name: compute
          resource-prefix: comp
          server-role: COMPUTE-ROLE
          allocation-policy: any
          min-count: 0
          service-components:
            - ntp-client
            - nova-compute
            - nova-compute-kvm
            - vmware-nsxt-node

        - name: esx-compute
          resource-prefix: esx-comp
          server-role: ESX-COMPUTE-ROLE
          allocation-policy: any
          service-components:
            - nova-esx-compute-proxy
            - nova-compute
            - ntp-client
0707010000002A000041ED0000000000000000000000025D82725500000000000000000000000000000000000000000000006800000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/data/designate0707010000002B000081A40000000000000000000000015D82725500000379000000000000000000000000000000000000007D00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/data/designate/designate_config.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  configuration-data:
    - name: DESIGNATE-CONFIG-CP1
      services:
        - designate
      data:
        dns_domain: example.org.
        ns_records:
          - hostname: ns1.example.org.
            priority: 1
          - hostname: ns2.example.org.
            priority: 2
0707010000002C000081A40000000000000000000000015D8272550000049D000000000000000000000000000000000000007500000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/data/disks_compute_node.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  disk-models:
  - name: COMPUTE-NODE-DISKS
    # Disk model to be used for compute nodes
    # /dev/sda_root is used as a volume group for /, /var/log and /var/crash
    # Additional disks can be added to either volume group
    volume-groups:
      - name: cpn-vg
        physical-volumes:
         - /dev/sda_root
        logical-volumes:
          - name: root
            size: 80%
            fstype: ext4
            mount: /
          - name: LV_CRASH
            size: 15%
            mount: /var/crash
            fstype: ext4
            mkfs-opts: -O large_file
0707010000002D000081A40000000000000000000000015D82725500001963000000000000000000000000000000000000007700000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/data/disks_controller_1TB.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  disk-models:
  - name: CONTROLLER-1TB-DISKS

    # This example is based on using a single 1TB disk for a volume
    # group that contains all file systems on a controller with 64GB
    # of memory.
    #
    # Additional disks can be added to the 'physical-volumes' section.
    #
    #

    volume-groups:
      - name: ctlr-vg
        physical-volumes:

          # NOTE: 'sda_root' is a templated value. This value is checked in
          # os-config and replaced by the partition actually used on sda
          # e.g. sda1 or sda5
          - /dev/sda_root

          # Add any additional disks for the volume group here
          # -/dev/sdx
          # -/dev/sdy

        logical-volumes:
          # The policy is not to consume 100% of the space of each volume group.
          # At least 5% should be left free for snapshots. This example leaves 18%
          # free to allow for some flexibility.

          - name: root
            size: 6%
            fstype: ext4
            mount: /

          # Reserved space for kernel crash dumps
          # Should evaluate to a value that is slightly larger than
          # the memory size of your server
          - name: crash
            size: 6%
            mount: /var/crash
            fstype: ext4
            mkfs-opts: -O large_file

          # Local Log files.  Depending on your retention policy
          # log files can require significant disc space
          - name: log
            size: 16%
            mount: /var/log
            fstype: ext4
            mkfs-opts: -O large_file

          # Mysql Database.  All persistent state from OpenStack services
          # is saved here.  Although the individual objects are small the
          # accumulated data can grow over time
          - name: mysql
            size: 6%
            mount: /var/lib/mysql
            fstype: ext4
            mkfs-opts: -O large_file
            consumer:
              name: mysql

          # Rabbitmq works mostly in memory, but needs to be able to persist
          # messages to disc under high load. This area should evaluate to a value
          # that is slightly larger than the memory size of your server
          - name: rabbitmq
            size: 7%
            mount: /var/lib/rabbitmq
            fstype: ext4
            mkfs-opts: -O large_file
            consumer:
              name: rabbitmq
              rabbitmq_env: home

          # Database storage for event monitoring and metering data (Monasca).
          - name: cassandra_db
            size: 19%
            mount: /var/cassandra/data
            fstype: ext4
            mkfs-opts: -O large_file
            consumer:
              name: cassandra

          - name: cassandra_log
            size: 1%
            mount: /var/cassandra/commitlog
            fstype: ext4
            mkfs-opts: -O large_file
            consumer:
              name: cassandra

          # Messaging system for monitoring and logging.
          - name: kafka
            size: 7%
            mount: /var/kafka
            fstype: ext4
            mkfs-opts: -O large_file
            consumer:
              name: kafka

          # Data storage for centralized logging. This holds log entries from all
          # servers in the cloud and hence can require a lot of disk space.
          - name: elasticsearch
            size: 13%
            mount: /var/lib/elasticsearch
            fstype: ext4

          # Zookeeper is used to provide cluster co-ordination in the monitoring
          # system.  Although not a high user of disc space we have seen issues
          # with zookeeper snapshots filling up filesystems so we keep it in its
          # own space for stability.
          - name: zookeeper
            size: 1%
            mount: /var/lib/zookeeper
            fstype: ext4

        consumer:
           name: os

    # Cinder: cinder volume needs temporary local filesystem space to convert
    # images to raw when creating bootable volumes. Using a separate volume
    # will both ringfence this space and avoid filling /
    # The size should represent the raw size of the largest image times
    # the number of concurrent bootable volume creations.
    # The logical volume can be part of an existing volume group or a
    # dedicated volume group.
    #  - name: cinder-vg
    #    physical-volumes:
    #      - /dev/sdx
    #    logical-volumes:
    #     - name: cinder_image
    #       size: 5%
    #       mount: /var/lib/cinder
    #       fstype: ext4

    #  Glance cache: if a logical volume with consumer usage 'glance-cache'
    #  is defined Glance caching will be enabled. The logical volume can be
    #  part of an existing volume group or a dedicated volume group.
    #  - name: glance-vg
    #    physical-volumes:
    #      - /dev/sdx
    #    logical-volumes:
    #     - name: glance-cache
    #       size: 95%
    #       mount: /var/lib/glance/cache
    #       fstype: ext4
    #       mkfs-opts: -O large_file
    #       consumer:
    #         name: glance-api
    #         usage: glance-cache

    # Audit: Audit logs can consume significant disc space.  If you
    # are enabling audit then it is recommended that you use a dedicated
    # disc.
    #  - name: audit-vg
    #    physical-volumes:
    #      - /dev/sdz
    #    logical-volumes:
    #      - name: audit
    #        size: 95%
    #        mount: /var/audit
    #        fstype: ext4
    #        mkfs-opts: -O large_file

    # Additional disk group defined for Swift
    device-groups:
      - name: swiftobj
        devices:
          - name: /dev/sdb
          - name: /dev/sdc
          # Add any additional disks for swift here
          # -name: /dev/sdd
          # -name: /dev/sde
        consumer:
          name: swift
          attrs:
            rings:
              - account
              - container
              - object-0
0707010000002E000081A40000000000000000000000015D827255000006E3000000000000000000000000000000000000007100000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/data/firewall_rules.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

#
# Ardana will create firewall rules to enable the required access for
# all of the deployed services. Use this section to define any
# additional access.
#
# Each group of rules can be applied to one or more network groups
# Examples are given for ping and ssh
#
# Names of rules, (e.g. "PING") are arbitrary and have no special significance
#

  firewall-rules:

    - name: SSH
      # network-groups is a list of all the network group names
      # that the rules apply to
      network-groups:
      - MANAGEMENT
      - INTERNAL-API
      rules:
      - type: allow
        # range of remote addresses in CIDR format that this
        # rule applies to
        remote-ip-prefix:  0.0.0.0/0
        port-range-min: 22
        port-range-max: 22
        # protocol must be one of: null, tcp, udp or icmp
        protocol: tcp

    - name: PING
      network-groups:
      - MANAGEMENT
      - EXTERNAL-API
      - INTERNAL-API
      rules:
      # open ICMP echo request (ping)
      - type: allow
        remote-ip-prefix:  0.0.0.0/0
        # icmp type
        port-range-min: 8
        # icmp code
        port-range-max: 0
        protocol: icmp

0707010000002F000081A40000000000000000000000015D82725500000926000000000000000000000000000000000000007100000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/data/net_interfaces.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  interface-models:
      #
      # Edit the device names and bond options
      # to match your environment
      #
    - name: CONTROLLER-INTERFACES
      network-interfaces:
        - name: BOND0
          device:
              name: bond0
          bond-data:
              options:
                  mode: active-backup
                  miimon: 200
                  primary: hed3
              provider: linux
              devices:
                - name: hed3
                - name: hed4
          network-groups:
            - MANAGEMENT
            - EXTERNAL-API
            - INTERNAL-API

    - name: COMPUTE-INTERFACES
      network-interfaces:
        - name: hed1
          device:
            name: hed1
          network-groups:
            - TRUNK
        - name: hed2
          device:
            name: hed2
          network-groups:
            - TRUNK
        - name: BOND0
          device:
              name: bond0
          bond-data:
              options:
                  mode: active-backup
                  miimon: 200
                  primary: hed3
              provider: linux
              devices:
                - name: hed3
                - name: hed4
          network-groups:
            - MANAGEMENT
            - EXTERNAL-API
            - INTERNAL-API

    - name: ESX-COMPUTE-INTERFACES
      network-interfaces:
        - name: eth0
          device:
              name: eth0
          forced-network-groups:
            - EXTERNAL-API
        - name: eth1
          device:
              name: eth1
          forced-network-groups:
            - MANAGEMENT
        - name: eth2
          device:
              name: eth2
          forced-network-groups:
            - INTERNAL-API
07070100000030000081A40000000000000000000000015D827255000010E8000000000000000000000000000000000000007100000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/data/network_groups.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  network-groups:

    #
    # External API
    #
    # This is the network group that users will use to
    # access the public API endpoints of your cloud
    #
    - name: EXTERNAL-API
      hostname-suffix: extapi
      component-endpoints:
        - bind-ext
      load-balancers:
        - provider: ip-cluster
          name: extlb
          # If external-name is set then public urls in keystone
          # will use this name instead of the IP address.
          # You must either set this to a name that can be resolved in your network
          # or comment out this line to use IP addresses
          # external-name:

          tls-components:
            - default
          roles:
            - public
          cert-file: my-public-entry-scale-esx-nsx-cert
          # This is the name of the certificate that will be used on the load balancer.
          # Ardana will look for a file with this name in the config/tls/certs directory.
          # This is the certificate that matches your setting for external-name
          #
          # Note that it is also possible to have per service certificates:
          #
          # cert-file:
          # default: my-public-entry-scale-esx-nsx-cert
          # horizon: my-horizon-cert
          # nova-api: my-nova-cert
          #
          # The configuration-processor will also create a request template for each
          # named certificate under
          # "info/cert_reqs/"
          #
          # And this will be of the form
          #
          # info/cert_reqs/my-public-entry-scale-esx-nsx-cert
          # info/cert_reqs/my-horizon-cert
          # info/cert_reqs/my-nova-cert
          #
          # These request templates contain the subject Alt-names that
          # the certificates need. A customer can add to this template
          # before generating their Certificate Signing Request (CSR).
          # They would then send the CSR to their CA to be signed and
          # receive the certificate, which can then be dropped into
          # "config/tls/certs".
          #
          # When you bring in your own certificate you may want to bring
          # in the trust chains (or CA certificate) for this certificate.
          # This is usually not required if the CA is a public signer that
          # gets bundled by the system. However, we suggest you include it
          # into Ardana anyway by copying the file into the directory
          # "config/cacerts/".
          # Note that the file extension should be .crt or it will not
          # be processed by Ardana.
          #

    #
    # Management
    #
    # This is the network group that will be used for
    # management traffic within the cloud.
    #
    # The interface used by this group will be presented
    # to Neutron as physnet1, and used by tenant VLANS
    #
    - name: MANAGEMENT
      hostname-suffix: mgmt
      hostname: true
      component-endpoints:
        - lifecycle-manager
        - lifecycle-manager-target
      routes:
        - default

    ##
    ## TRUNK
    ##
    ## This is the network group that will be used for
    ## trunk network on the OVSvApp service VM.
    ## The trunk network is used to apply security
    ## group rules on tenant traffic.
    - name: TRUNK
      hostname-suffix: trunk

    #
    # INTERNAL-API
    #
    - name: INTERNAL-API
      tls-component-endpoints:
        - barbican-api
      component-endpoints:
        - default
      load-balancers:
        - provider: ip-cluster
          name: lb
          tls-components:
            - default
          components:
            - nova-metadata
          roles:
            - internal
            - admin
          cert-file: ardana-internal-cert
07070100000031000081A40000000000000000000000015D8272550000067D000000000000000000000000000000000000006B00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/data/networks.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  networks:
    #
    # This example uses the following networks
    #
    # Network       CIDR             VLAN
    # -------       ----             ----
    # External API  10.0.1.0/24      101 (tagged)
    # Internal API  192.168.50.0/24  102 (tagged)
    # Management    192.168.10.0/24  100 (untagged)
    # Trunk                          untagged
    #
    # Modify these values to match your environment
    #
    - name: EXTERNAL-API-NET
      vlanid: 101
      tagged-vlan: true
      cidr: 10.0.1.0/24
      gateway-ip: 10.0.1.1
      network-group: EXTERNAL-API

    - name: MANAGEMENT-NET
      tagged-vlan: false
      vlanid: 100
      cidr: 192.168.10.0/24
      gateway-ip: 192.168.10.1
      network-group: MANAGEMENT
      addresses:
        - 192.168.10.1-192.168.10.250

    - name: TRUNK-NET
      tagged-vlan: false
      network-group: TRUNK

    - name: INTERNAL-API-NET
      vlanid: 102
      cidr: 192.168.50.0/24
      tagged-vlan: true
      network-group: INTERNAL-API
      addresses:
        - 192.168.50.4-192.168.50.250
07070100000032000081A40000000000000000000000015D82725500000B75000000000000000000000000000000000000006F00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/data/nic_mappings.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  # nic-mappings are used to ensure that the device name used by the
  # operating system always maps to the same physical device.
  # A nic-mapping is associated to a server in the server definition.
  # The logical-name specified here can be used as a device name in
  # the network interface-models definitions.
  #
  # - name               user-defined name for each mapping
  #   physical-ports     list of ports for this mapping
  #     - logical-name   device name to be used by the operating system
  #       type           physical port type
  #       bus-address    bus address of the physical device
  #
  # Notes:
  # - The PCI bus addresses are examples. You will need to determine
  #   the values pertinent to your servers. These can be found with
  #   the `lspci` command or from the server BIOS
  # - enclose the bus address in quotation marks so yaml does not
  #   misinterpret the embedded colon (:) characters
  # - simple-port is the only currently supported port type
  # - choosing a new device name prefix (e.g. 'eth' -> 'hed') will
  #   help prevent remapping errors

  nic-mappings:

    - name: ESXI_VMXNET3_4PORT
      physical-ports:
        - logical-name: hed1
          type: simple-port
          bus-address: "0000:06:00.0"

        - logical-name: hed2
          type: simple-port
          bus-address: "0000:07:00.0"

        - logical-name: hed3
          type: simple-port
          bus-address: "0000:08:00.0"

        - logical-name: hed4
          type: simple-port
          bus-address: "0000:09:00.0"

    - name: MY-4PORT-SERVER
      physical-ports:
        - logical-name: hed1
          type: simple-port
          bus-address: "0000:06:00.0"

        - logical-name: hed2
          type: simple-port
          bus-address: "0000:07:00.0"

        - logical-name: hed3
          type: simple-port
          bus-address: "0000:08:00.0"

        - logical-name: hed4
          type: simple-port
          bus-address: "0000:09:00.0"

    - name: ESXI-COMPUTE-3PORT
      physical-ports:
        - logical-name: eth0
          type: simple-port
          bus-address: "0000:06:00.0"
        - logical-name: eth1
          type: simple-port
          bus-address: "0000:07:00.0"
        - logical-name: eth2
          type: simple-port
          bus-address: "0000:08:00.0"
07070100000033000041ED0000000000000000000000025D82725500000000000000000000000000000000000000000000006200000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/data/nsx07070100000034000081A40000000000000000000000015D82725500001F3C000000000000000000000000000000000000007100000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/data/nsx/nsx_config.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2
  configuration-data:
    - name: NSXT-CONFIG-CP1
      services:
        - nsx
      data:

        # (Required) nsx_flavor.  Set to 'nsxt'
        nsx_flavor: 'nsxt'

        # (Required) URL for the NSX Manager (e.g. https://management_ip).
        # The IP address of one or more NSX Managers separated by commas
        nsx_api_managers: 'https://<nsx-mgr-ip>:<port>'

        # (Required) username to login to the NSX Manager API.
        nsx_api_user: 'admin'

        # (Required) Encrypted NSX Manager API password.
        # Password encryption is done by the script
        # ~/openstack/ardana/ansible/ardanaencrypt.py on the deployer:
        #
        # $ cd ~/openstack/ardana/ansible
        # $ export ARDANA_USER_PASSWORD_ENCRYPT_KEY=<encryption key>
        # $ ./ardanaencrypt.py
        #
        # The script will prompt for the NSX Manager password. The string
        # generated is the encrypted password. Enter the string enclosed
        # by double-quotes below.
        nsx_api_password: "<encrypted-nsx-mgr-passwd-from-ardanaencrypt>"

        # (Required) default_overlay_tz_uuid:
        # UUID of the default NSX overlay transport zone that will be used
        # for creating tunneled isolated Neutron networks. If no physical
        # network is specifed when creating a logical network, this transport
        # zone will be used by default.
        default_overlay_tz_uuid: '<a-uuid>'

        # (Optional) dns_domain
        # Domain to use for building the hostnames.
        # dns_domain: 'domain'

        # (Optional) default_vlan_tz_uuid
        # Only required when creating VLAN or flat provider networks. UUID of default
        # NSX VLAN transport zone that will be used for bridging between Neutron networks,
        # if no physical network has been specified.
        # default_vlan_tz_uuid: '<a-uuid>'

        # (Optional) default_edge_cluster_uuid:
        # Default Edge Cluster Identifier
        # default_edge_cluster_uuid: '<a-uuid>'

        # (Optional) retries
        # Maximum number of times to retry API requests upon stale
        # revision errors.
        # retries: 3

        # (Optional) insecure:
        # If true (default), the NSX server certificate is not verified.
        # If false, then the default CA truststore is used for verification.
        # This option is ignored if "ca_file" is set.
        # insecure: True

        # (Optional) ca_file: Name of the certificate file. If insecure is set to True,
        # then this parameter is ignored. If insecure is set to False and this
        # parameter is not defined, then the system root CAs will be used
        # to verify the server certificate.
        # ca_file: a/nsx/certificate/file

        # (Optional) http_timeout:
        # Seconds before aborting a HTTP connection to a NSX manager.
        # http_timeout: 60

        # (Optional) http_read_timeout:
        # Seconds before aborting a HTTP read response from a NSX Manager
        # http_read_timeout: 60

        # (Optional) http_retries
        # Maximum number of times to retry a HTTP connection
        # http_retries: 5

        # (Optional) Maximum number of connections to each NSX manager
        # concurrent_connections: 10

        # (Optional) conn_idle_timeout
        # Seconds to wait before ensuring connectivity to the NSX manager
        # if no manager connection has been used
        # conn_idle_timeout: 120

        # (Optional) default_tier0_router_uuid
        # UUID of the default tier0 router that will be used for connecting
        # to tier1 logical routers and configuring external networks.
        # default_tier0_router_uuid: '<a-uuid>'

        # (Optional) default_bridge_cluster_uuid
        # UUID of the default NSX bridge cluster that will be used to perform
        # L2 gateway bridging between VXLAN and VLAN networks.  If not
        # specified, the admin will have to create an L2 gateway
        # corresponding to a NSX bridge cluster using the L2 gateway API. This
        # field must be specified on one of the active Neutron servers only.
        # default_bridge_cluster_uuid: '<a-uuid>'

        # (Optional) number_of_nested_groups
        # The number of nested groups which are used by the plugin, each
        # neutron security-group is added to one nested group and each nested
        # group can contain a maximum of 500 security-groups. Therefore, the
        # maximum of security groups that can be created is 500 *
        # number_of_nested_groups.  The defult is 8 nested groups, which
        # allows a maximum of 4000 security-groups
        # number_of_nested_groups: 8

        # (Optional) metadata_mode
        # Acceptable values are
        # access_network : enables a dedicated connection to the metadata
        # proxy for metadata server access via Neutron router.
        # dhcp_host_route : enables host route injection via the dhcp agent.
        # This option is only useful if running on a host that does not
        # support namespaces otherwise access_network should be used.
        # metadata_mode: '<metadata-mode>'

        # (Optional) metadata_on_demand
        # If True, an internal metadata network is created for a router
        # only when the router is attached to a DHCP-disabled subnet.
        # metadata_on_demand: 'False'

        # (Optional) native_dhcp_metadata
        # If true, DHCP and metadata proxy services will be provided by NSX.
        # native_dhcp_metadata: True

        # Note: uncomment dhcp_profile_uuid and metadata_proxy_uuid
        # if native_dhcp_metadata is True

        # (Optional) metadata_proxy_uuid
        # The UUID of the NSX Metadata Proxy that is used to enable native
        # metadata service. It needs to be created in NSX before starting
        # Neutron with the NSX plugin. (Uncomment if native_dhcp_metadata is True)
        # metadata_proxy_uuid: '<metadata-proxy-uuid-from-nsx-manager>'

        # (Optional) dhcp_profile_uuid
        # The UUID of the NSX DHCP Profile that is used to enable native
        # DHCP service. It needs to be created in NSX before starting
        # Neutron with the NSX plugin (Uncomment if native_dhcp_metadata is True)
        # dhcp_profile_uuid: '<dhcp-profile-uuid-from-nsx-mgr>'

        # (Optional) dhcp_lease_time
        # The number of seconds an IP address assigned by NSX's dhcp server will
        # be valid. Default value is 86400.
        # dhcp_lease_time:  86400

        # (Optional) dhcp_relay_service
        # Name or UUID of the NSX dhcp relay service that will be used to
        # enable DHCP relay on router ports
        # dhcp_relay_service: '<dhcp-relay-service-uuid>'

        # (Optional): locking_coordinator_url
        # URL for distributed locking coordination resource for lock manager.
        # This value is passed as a parameter to the tooz coordinator. By
        # default, the value is None and oslo_concurrency is used for single-
        # node lock management. Default is None.
        # locking_coordinator_url: None
        #
        # (Optional): qos_peak_bw_multiplier
        # The QoS rules peak bandwidth value will be the configured maximum
        # bandwidth of the QoS rule, multiplied by this value. Value must be
        # bigger than 1. Default is 2.
        # qos_peak_bw_multiplier: 2
07070100000035000081A40000000000000000000000015D82725500000B2F000000000000000000000000000000000000006F00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/data/pass_through.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
product:
  version: 2
pass-through:
  global:
    vmware:
      - username: <vcenter-admin-username>
        ip: <vcenter-ip>
        port: 443
        cert_check: false
        # The password needs to be encrypted using the script
        # openstack/ardana/ansible/ardanaencrypt.py on the deployer:
        #
        # $ cd ~/openstack/ardana/ansible
        # $ export ARDANA_USER_PASSWORD_ENCRYPT_KEY=<encryption key>
        # $ ./ardanaencrypt.py
        #
        # The script will prompt for the vCenter password. The string
        # generated is the encrypted password. Enter the string
        # enclosed by double-quotes below.
        password: "<encrypted-passwd-from-ardanaencrypt>"

        # The id is obtained from the URL
        # https://<vcenter-ip>/mob/?moid=ServiceInstance&doPath=content%2eabout,
        # field instanceUUID.
        id: <vcenter-uuid>
  servers:
    -
      # Here the 'id' refers to the name of the node running the
      # esx-compute-proxy. This is identical to the 'servers.id' in
      # servers.yml. There should be one esx-compute-proxy node per ESX
      # resource pool.
      id: esx-compute1
      data:
        vmware:
          vcenter_cluster: <vmware cluster1 name>
          vcenter_id: <vcenter-uuid>
    -
      id: esx-compute2
      data:
        vmware:
          vcenter_cluster: <vmware cluster2 name>
          vcenter_id: <vcenter-uuid>

    -
      # In case of an NSX-T deployment, specific parameters need to be
      # specified to register a KVM compute node with the manager. As before,
      # 'id' refers to the 'servers.id' in servers.yml.
      id: compute1
      data:
        vmware_nsxt:

          # These are the credentials that the manager will use to access
          # the host, install nsxt specific host packages and configure them.
          username: <host username>
          password: <host encrypted-passwd-from-ardanaencrypt>

          # The transport node will be configured with the host switch spec and
          # transport zone endpoints copied from this profile. Note that
          # included host switch profiles of type NiocProfile will be filtered
          # out and ignored as they only apply to ESX hosts.
          transport_node_profile_id: <transport node profile uuid>


07070100000036000081A40000000000000000000000015D82725500000961000000000000000000000000000000000000007000000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/data/server_groups.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  server-groups:

    #
    # Server Groups provide a mechanism for organizing servers
    # into a hierarchy that reflects the physical topology.
    #
    # When allocating a server the configuration processor
    # will search down the hierarchy from the list of server
    # groups identified as the failure-zones for the control
    # plane until it finds an available server of the requested
    # role.   If the allocation policy is "strict" servers are
    # allocated from different failure-zones.
    #
    # When determining which network from a network group to
    # associate with a server the configuration processor will
    # search up the hierarchy from the server group containing the
    # server until it finds a network in the required network
    # group.
    #

    #
    # In this example there is only one network in each network
    # group and so we put all networks in the top level server
    # group.   Below this we create server groups for three
    # failure zones, within which servers are grouped by racks.
    #
    # Note: the association of servers to server groups is part
    # of the server definition (servers.yml)
    #

    #
    # At the top of the tree we have a server groups for
    # networks that can reach all servers
    #
    - name: CLOUD
      server-groups:
        - AZ1
        - AZ2
        - AZ3
      networks:
        - EXTERNAL-API-NET
        - MANAGEMENT-NET
        - INTERNAL-API-NET
        - TRUNK-NET

    #
    # Create a group for each failure zone
    #
    - name: AZ1
      server-groups:
        - RACK1

    - name: AZ2
      server-groups:
        - RACK2

    - name: AZ3
      server-groups:
        - RACK3

    #
    # Create a group for each rack
    #
    - name: RACK1

    - name: RACK2

    - name: RACK3
07070100000037000081A40000000000000000000000015D827255000003BA000000000000000000000000000000000000006F00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/data/server_roles.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  server-roles:

    - name: CONTROLLER-ROLE
      interface-model: CONTROLLER-INTERFACES
      disk-model: CONTROLLER-1TB-DISKS

    - name: COMPUTE-ROLE
      interface-model: COMPUTE-INTERFACES
      disk-model: COMPUTE-NODE-DISKS

    - name: ESX-COMPUTE-ROLE
      interface-model: ESX-COMPUTE-INTERFACES
      disk-model: COMPUTE-NODE-DISKS
07070100000038000081A40000000000000000000000015D82725500000A49000000000000000000000000000000000000006A00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/data/servers.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  baremetal:
    # NOTE: These values need to be changed to match your environment.
    # Define the network range that contains the ip-addr values for
    # the individual servers listed below.
    subnet: 192.168.10.0
    netmask: 255.255.255.0

  servers:
    # NOTE: Addresses of servers need to be
    #       changed to match your environment.
    #
    #       Add additional servers as required

    # Controllers
    - id: controller1
      ip-addr: 192.168.10.3
      role: CONTROLLER-ROLE
      server-group: RACK1
      nic-mapping: ESXI_VMXNET3_4PORT
      mac-addr: "b2:72:8d:ac:7c:6f"
      ilo-ip: 192.168.9.3
      ilo-password: password
      ilo-user: admin

    - id: controller2
      ip-addr: 192.168.10.4
      role: CONTROLLER-ROLE
      server-group: RACK2
      nic-mapping: ESXI_VMXNET3_4PORT
      mac-addr: "8a:8e:64:55:43:76"
      ilo-ip: 192.168.9.4
      ilo-password: password
      ilo-user: admin

    - id: controller3
      ip-addr: 192.168.10.5
      role: CONTROLLER-ROLE
      server-group: RACK3
      nic-mapping: ESXI_VMXNET3_4PORT
      mac-addr: "26:67:3e:49:5a:a7"
      ilo-ip: 192.168.9.5
      ilo-password: password
      ilo-user: admin

    # Compute Nodes
    - id: compute1
      server-group: RACK1
      nic-mapping: MY-4PORT-SERVER
      ip-addr: 192.168.10.6
      mac-addr: "00:de:ad:be:ef:10"
      role: COMPUTE-ROLE
      ilo-ip: 1.1.1.10
      ilo-user: dummy-user
      ilo-password: dummy-password

    # Nova Compute proxy node
    - id: esx-compute1
      server-group: RACK1
      nic-mapping: ESXI-COMPUTE-3PORT
      ip-addr: 192.168.10.7
      mac-addr: "00:de:ad:be:ef:11"
      role: ESX-COMPUTE-ROLE
      ilo-ip: 1.1.1.11
      ilo-user: dummy-user
      ilo-password: dummy-password

    - id: esx-compute2
      server-group: RACK1
      nic-mapping: ESXI-COMPUTE-3PORT
      ip-addr: 192.168.10.8
      mac-addr: "00:de:ad:be:ef:12"
      role: ESX-COMPUTE-ROLE
      ilo-ip: 1.1.1.12
      ilo-user: dummy-user
      ilo-password: dummy-password
07070100000039000041ED0000000000000000000000025D82725500000000000000000000000000000000000000000000006400000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/data/swift0707010000003A000081A40000000000000000000000015D8272550000060A000000000000000000000000000000000000007500000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxt/data/swift/swift_config.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
product:
  version: 2

configuration-data:
  - name: SWIFT-CONFIG-CP1
    services:
      - swift
    data:
      control_plane_rings:
        swift-zones:
          - id: 1
            server-groups:
              - AZ1
          - id: 2
            server-groups:
              - AZ2
          - id: 3
            server-groups:
              - AZ3
        rings:
          - name: account
            display-name: Account Ring
            min-part-hours: 16
            partition-power: 12
            replication-policy:
              replica-count: 3

          - name: container
            display-name: Container Ring
            min-part-hours: 16
            partition-power: 12
            replication-policy:
              replica-count: 3

          - name: object-0
            display-name: General
            default: yes
            min-part-hours: 16
            partition-power: 12
            replication-policy:
              replica-count: 3
0707010000003B000041ED0000000000000000000000035D82725500000000000000000000000000000000000000000000005900000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv0707010000003C000081A40000000000000000000000015D827255000013C6000000000000000000000000000000000000006300000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/README.md
(c) Copyright 2017 SUSE LLC

Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.


## Ardana Single region Entry Scale Cloud with ESX+NSX Example ##

The input files in this example deploy a cloud that uses the ESX hypervisor
with NSX-V networking and has the following characteristics:


### Compute Proxy Nodes ###

- A single server that runs the Nova ESX compute proxy. There should be one
  node per ESX resource pool. The proxy nodes can be ESX virtual machines;
  when running as VMs, they should be in an HA cluster. Do not image the VMs
  serving as compute proxy nodes: use the nodelist option of the bm-reimage.yml
  playbook to avoid imaging them (see the example below).
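
  A sketch of how this is typically run on the deployer; the server ids below
  are taken from ***data/servers.yml***, and the playbook location may differ
  in your environment, so treat this as an illustration rather than a literal
  recipe:

```
cd ~/openstack/ardana/ansible
# Image only the listed bare-metal nodes; the ESX compute proxy VMs
# (esx-compute1, esx-compute2) are deliberately not listed.
ansible-playbook -i hosts/localhost bm-reimage.yml \
    -e nodelist=controller1,controller2,controller3
```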


### Control Planes ###

- A single control plane consisting of three servers that co-host all of the
  other required OpenStack services.


### Deployer Node ###

This configuration runs the lifecycle-manager (formerly referred to as the
deployer) on a control plane node. You need to include this node's address
in your servers.yml definition. This function does not need a dedicated
network.

The minimum server count for this example is therefore four: three control
plane servers for the OpenStack services, plus one activated vCenter cluster
with at least one host for the vCenter appliance, NSX Manager, and ESX
compute proxy VMs.


An example set of servers is defined in ***data/servers.yml***. You will
need to modify this file to reflect your specific environment.


### Networking ###

The example requires the following networks:

IPMI/iLO network, connected to the deployer and the IPMI/iLO ports of all
servers

A pair of bonded NICs which are used by the following networks:

- EXTERNAL-API - This is the network that users will use to make requests to
                 the cloud
- INTERNAL-API - This is the network that will be used to access the
                 ESX metadata proxy servers
- MANAGEMENT - This is the network that will be used for all internal traffic
               between the cloud services and traffic between VMs on private
               networks within the cloud

The Data Center Management network (which hosts the vCenter server) must be
reachable from the Cloud Management network so that the controllers and
compute proxy nodes can communicate with the vCenter server.

An example set of networks is defined in ***data/networks.yml***. You will
need to modify this file to reflect your environment.

The example uses the devices hed3 & hed4 as a bonded pair for all services.
If you need to modify these for your environment they are defined in
***data/net_interfaces.yml***. The network devices eth3 & eth4 are renamed to
devices hed3 & hed4 using the PCI bus mappings specified in
***data/nic_mappings.yml***. You may need to modify the PCI bus addresses to
match your system (see the example below).
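
As noted in the comments in ***data/nic_mappings.yml***, the PCI bus addresses
can be found with the `lspci` command or from the server BIOS; the filter
below is just one illustrative way to narrow the output:

```
# List PCI network devices with their full bus addresses (e.g. 0000:08:00.0)
lspci -D | grep -i ethernet
```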

### Adapting the entry-scale model to fit your environment ###

The minimum set of changes you need to make to adapt the model for your
environment are:

- Update servers.yml to list the details of your bare metal servers (i.e.,
  iLO access info). You need to perform this step if you are using the
  Ardana-supplied Cobbler playbooks to install Linux on your servers.

- Update the networks.yml file to replace network CIDRs and VLANs with site
  specific values

- Update the nic_mappings.yml file to ensure that network devices are mapped
  to the correct physical port(s)

- Review the disk models (disks_*.yml) and confirm that the associated servers
  have the number of disks required by the disk model. The device names in the
  disk models might need to be adjusted to match the probe order of your servers.

Disk models are provided as follows:
    - DISK SET CONTROLLER: Minimum 1 disk
    - DISK SET COMPUTE NODE DISKS: These are the disks used on the ESX compute
      proxy nodes. Each node is an ESX VM, and one virtual disk is expected to
      be created for each VM.

- Update the net_interfaces.yml file to match the server NICs used in your
  configuration. This file has a separate interface model definition for
  each of the following:
    - INTERFACE SET CONTROLLER
    - INTERFACE SET ESX-COMPUTE

*It is not recommended that users modify the DISK_SET used by the Nova compute proxy.*


## The NSX Configuration Data ##

The NSX configuration data file data/nsx/nsx_config.yml contains the
information about the NSX installation that is needed to configure Neutron to
use the NSX-V core plugin. See the comments in that file for descriptions of
the parameters.


## The pass_through.yml File ##

The ESX compute proxy needs the information in pass_through.yml in order to
configure itself. See the comments in that file for descriptions of the
parameters. Note that the vCenter and NSX Manager passwords in these files
must be stored in encrypted form (see the example below).
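
Both ***data/pass_through.yml*** and ***data/nsx/nsx_config.yml*** expect
encrypted passwords. The comments in those files describe the procedure; a
short sketch of the steps on the deployer:

```
cd ~/openstack/ardana/ansible
export ARDANA_USER_PASSWORD_ENCRYPT_KEY=<encryption key>
# Prompts for the plain-text password and prints the encrypted string
./ardanaencrypt.py
```

Paste the resulting string, enclosed in double quotes, into the password
fields of those files.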
0707010000003D000081A40000000000000000000000015D82725500000994000000000000000000000000000000000000006900000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/cloudConfig.yml#
# (c) Copyright 2017 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  cloud:
    name: entry-scale-nsx

    # The following values are used when
    # building hostnames
    hostname-data:
        host-prefix: ardana
        member-prefix: -m

    # List of ntp servers for your site
    ntp-servers:
    #    - "ntp-server1"
    #    - "ntp-server2"

    # dns resolving configuration for your site
    # refer to resolv.conf for details on each option
    dns-settings:
    #  nameservers:
    #    - name-server1
    #    - name-server2
    #    - name-server3
    #
    #  domain: sub1.example.net
    #
    #  search:
    #    - sub1.example.net
    #    - sub2.example.net
    #
    #  sortlist:
    #    - 192.168.160.0/255.255.240.0
    #    - 192.168.0.0
    #
    #  # option flags are '<name>:' to enable, remove to unset
    #  # options with values are '<name>:<value>' to set
    #
    #  options:
    #    debug:
    #    ndots: 2
    #    timeout: 30
    #    attempts: 5
    #    rotate:
    #    no-check-names:
    #    inet6:
    smtp-settings:
    #  server: mailserver.examplecloud.com
    #  port: 25
    #  timeout: 15
    # These are only needed if your server requires authentication
    #  user:
    #  password:

    # Generate firewall rules
    firewall-settings:
        enable: true
        # log dropped packets
        logging: true

    # Disc space needs to be allocated to the audit directory before enabling
    # auditing.
    # Default can be either "disabled" or "enabled". Services listed in
    # "enabled-services" and "disabled-services" override the default setting.
    audit-settings:
       audit-dir: /var/audit
       default: disabled
       #enabled-services:
       #  - keystone
       #  - barbican
       disabled-services:
         - nova
         - barbican
         - keystone
         - cinder
         - ceilometer
         - neutron
         - swift
0707010000003E000041ED0000000000000000000000055D82725500000000000000000000000000000000000000000000005E00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/data0707010000003F000081A40000000000000000000000015D82725500000D58000000000000000000000000000000000000007000000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/data/control_plane.yml#
# (c) Copyright 2017-2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  control-planes:
    - name: control-plane-1
      control-plane-prefix: cp1
      region-name: region1
      failure-zones:
        - AZ1
        - AZ2
        - AZ3
      configuration-data:
        - DESIGNATE-CONFIG-CP1
        - NSXV-CONFIG-CP1
        - SWIFT-CONFIG-CP1
      common-service-components:
        - logging-rotate
        - logging-producer
        - monasca-agent
        - stunnel
        - lifecycle-manager-target
      clusters:
        - name: cluster1
          cluster-prefix: c1
          server-role: CONTROLLER-ROLE
          member-count: 3
          allocation-policy: strict
          service-components:
            - lifecycle-manager
            - tempest
            - ntp-server
            - swift-ring-builder
            - mysql
            - ip-cluster
            - apache2
            - keystone-api
            - keystone-client
            - rabbitmq
            - glance-api
            - glance-registry
            - glance-client
            - cinder-api
            - cinder-scheduler
            - cinder-volume
            - cinder-backup
            - cinder-client
            - nova-api
            - nova-placement-api
            - nova-scheduler
            - nova-conductor
            - nova-novncproxy
            - nova-client
            - neutron-server
            - vmware-nsxv
            - neutron-client
            - horizon
            - swift-proxy
            - memcached
            - swift-account
            - swift-container
            - swift-object
            - swift-client
            - heat-api
            - heat-api-cfn
            - heat-engine
            - heat-client
            - openstack-client
            - ceilometer-polling
            - ceilometer-agent-notification
            - ceilometer-common
            - ceilometer-client
            - zookeeper
            - kafka
            - spark
            - cassandra
            - storm
            - monasca-api
            - monasca-persister
            - monasca-notifier
            - monasca-threshold
            - monasca-client
            - monasca-transform
            - logging-server
            - ops-console-web
            - barbican-api
            - barbican-client
            - barbican-worker
            - designate-api
            - designate-central
            - designate-producer
            - designate-worker
            - designate-mdns
            - designate-client
            - bind
            - magnum-api
            - magnum-conductor

      resources:
        - name: esx-compute
          resource-prefix: esx-comp
          server-role: ESX-COMPUTE-ROLE
          allocation-policy: any
          service-components:
            - nova-esx-compute-proxy
            - nova-compute
            - ntp-client
07070100000040000041ED0000000000000000000000025D82725500000000000000000000000000000000000000000000006800000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/data/designate07070100000041000081A40000000000000000000000015D82725500000379000000000000000000000000000000000000007D00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/data/designate/designate_config.yml#
# (c) Copyright 2017 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  configuration-data:
    - name: DESIGNATE-CONFIG-CP1
      services:
        - designate
      data:
        dns_domain: example.org.
        ns_records:
          - hostname: ns1.example.org.
            priority: 1
          - hostname: ns2.example.org.
            priority: 2
07070100000042000081A40000000000000000000000015D8272550000049D000000000000000000000000000000000000007500000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/data/disks_compute_node.yml#
# (c) Copyright 2017 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  disk-models:
  - name: COMPUTE-NODE-DISKS
    # Disk model to be used for compute nodes
    # /dev/sda_root is used as a volume group for /, /var/log and /var/crash
    # Additional disks can be added to either volume group
    volume-groups:
      - name: cpn-vg
        physical-volumes:
         - /dev/sda_root
        logical-volumes:
          - name: root
            size: 80%
            fstype: ext4
            mount: /
          - name: LV_CRASH
            size: 15%
            mount: /var/crash
            fstype: ext4
            mkfs-opts: -O large_file
07070100000043000081A40000000000000000000000015D82725500001968000000000000000000000000000000000000007700000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/data/disks_controller_1TB.yml#
# (c) Copyright 2017-2018 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  disk-models:
  - name: CONTROLLER-1TB-DISKS

    # This example is based on using a single 1TB disk for a volume
    # group that contains all file systems on a controller with 64GB
    # of memory.
    #
    # Additional disks can be added to the 'physical-volumes' section.
    #
    #

    volume-groups:
      - name: ctlr-vg
        physical-volumes:

          # NOTE: 'sda_root' is a templated value. This value is checked in
          # os-config and replaced by the partition actually used on sda
          # e.g. sda1 or sda5
          - /dev/sda_root

          # Add any additional disks for the volume group here
          # -/dev/sdx
          # -/dev/sdy

        logical-volumes:
          # The policy is not to consume 100% of the space of each volume group.
          # At least 5% should be left free for snapshots. This example leaves 18%
          # free to allow for some flexibility.

          - name: root
            size: 6%
            fstype: ext4
            mount: /

          # Reserved space for kernel crash dumps
          # Should evaluate to a value that is slightly larger than
          # the memory size of your server
          - name: crash
            size: 6%
            mount: /var/crash
            fstype: ext4
            mkfs-opts: -O large_file

          # Local Log files.  Depending on your retention policy
          # log files can require significant disc space
          - name: log
            size: 16%
            mount: /var/log
            fstype: ext4
            mkfs-opts: -O large_file

          # Mysql Database.  All persistent state from OpenStack services
          # is saved here.  Although the individual objects are small the
          # accumulated data can grow over time
          - name: mysql
            size: 6%
            mount: /var/lib/mysql
            fstype: ext4
            mkfs-opts: -O large_file
            consumer:
              name: mysql

          # Rabbitmq works mostly in memory, but needs to be able to persist
          # messages to disc under high load. This area should evaluate to a value
          # that is slightly larger than the memory size of your server
          - name: rabbitmq
            size: 7%
            mount: /var/lib/rabbitmq
            fstype: ext4
            mkfs-opts: -O large_file
            consumer:
              name: rabbitmq
              rabbitmq_env: home

          # Database storage for event monitoring and metering data (Monasca).
          - name: cassandra_db
            size: 19%
            mount: /var/cassandra/data
            fstype: ext4
            mkfs-opts: -O large_file
            consumer:
              name: cassandra

          - name: cassandra_log
            size: 1%
            mount: /var/cassandra/commitlog
            fstype: ext4
            mkfs-opts: -O large_file
            consumer:
              name: cassandra

          # Messaging system for monitoring and logging.
          - name: kafka
            size: 7%
            mount: /var/kafka
            fstype: ext4
            mkfs-opts: -O large_file
            consumer:
              name: kafka

          # Data storage for centralized logging. This holds log entries from all
          # servers in the cloud and hence can require a lot of disk space.
          - name: elasticsearch
            size: 13%
            mount: /var/lib/elasticsearch
            fstype: ext4

          # Zookeeper is used to provide cluster co-ordination in the monitoring
          # system.  Although not a high user of disc space we have seen issues
          # with zookeeper snapshots filling up filesystems so we keep it in its
          # own space for stability.
          - name: zookeeper
            size: 1%
            mount: /var/lib/zookeeper
            fstype: ext4

        consumer:
           name: os

    # Cinder: cinder volume needs temporary local filesystem space to convert
    # images to raw when creating bootable volumes. Using a separate volume
    # will both ringfence this space and avoid filling /.
    # The size should represent the raw size of the largest image times
    # the number of concurrent bootable volume creations.
    # The logical volume can be part of an existing volume group or a
    # dedicated volume group.
    #  - name: cinder-vg
    #    physical-volumes:
    #      - /dev/sdx
    #    logical-volumes:
    #     - name: cinder_image
    #       size: 5%
    #       mount: /var/lib/cinder
    #       fstype: ext4

    #  Glance cache: if a logical volume with consumer usage 'glance-cache'
    #  is defined Glance caching will be enabled. The logical volume can be
    #  part of an existing volume group or a dedicated volume group.
    #  - name: glance-vg
    #    physical-volumes:
    #      - /dev/sdx
    #    logical-volumes:
    #     - name: glance-cache
    #       size: 95%
    #       mount: /var/lib/glance/cache
    #       fstype: ext4
    #       mkfs-opts: -O large_file
    #       consumer:
    #         name: glance-api
    #         usage: glance-cache

    # Audit: Audit logs can consume significant disc space.  If you
    # are enabling audit then it is recommended that you use a dedicated
    # disc.
    #  - name: audit-vg
    #    physical-volumes:
    #      - /dev/sdz
    #    logical-volumes:
    #      - name: audit
    #        size: 95%
    #        mount: /var/audit
    #        fstype: ext4
    #        mkfs-opts: -O large_file

    # Additional disk group defined for Swift
    device-groups:
      - name: swiftobj
        devices:
          - name: /dev/sdb
          - name: /dev/sdc
          # Add any additional disks for swift here
          # -name: /dev/sdd
          # -name: /dev/sde
        consumer:
          name: swift
          attrs:
            rings:
              - account
              - container
              - object-0
07070100000044000081A40000000000000000000000015D827255000006E3000000000000000000000000000000000000007100000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/data/firewall_rules.yml#
# (c) Copyright 2017 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

#
# Ardana will create firewall rules to enable the required access for
# all of the deployed services. Use this section to define any
# additional access.
#
# Each group of rules can be applied to one or more network groups
# Examples are given for ping and ssh
#
# Names of rules, (e.g. "PING") are arbitrary and have no special significance
#

  firewall-rules:

    - name: SSH
      # network-groups is a list of all the network group names
      # that the rules apply to
      network-groups:
      - MANAGEMENT
      - INTERNAL-API
      rules:
      - type: allow
        # range of remote addresses in CIDR format that this
        # rule applies to
        remote-ip-prefix:  0.0.0.0/0
        port-range-min: 22
        port-range-max: 22
        # protocol must be one of: null, tcp, udp or icmp
        protocol: tcp

    - name: PING
      network-groups:
      - MANAGEMENT
      - EXTERNAL-API
      - INTERNAL-API
      rules:
      # open ICMP echo request (ping)
      - type: allow
        remote-ip-prefix:  0.0.0.0/0
        # icmp type
        port-range-min: 8
        # icmp code
        port-range-max: 0
        protocol: icmp

07070100000045000081A40000000000000000000000015D82725500000661000000000000000000000000000000000000007100000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/data/net_interfaces.yml#
# (c) Copyright 2017-2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  interface-models:
      # This example uses hed3 and hed4 as a bonded
      # pair for all networks on the controller role
      #
      # Edit the device names and bond options
      # to match your environment
      #
    - name: CONTROLLER-INTERFACES
      network-interfaces:
        - name: BOND0
          device:
              name: bond0
          bond-data:
              options:
                  mode: active-backup
                  miimon: 200
                  primary: hed3
              provider: linux
              devices:
                - name: hed3
                - name: hed4
          network-groups:
            - MANAGEMENT
            - EXTERNAL-API
            - INTERNAL-API

    - name: ESX-COMPUTE-INTERFACES
      network-interfaces:
        - name: eth0
          device:
              name: eth0
          forced-network-groups:
            - MANAGEMENT
        - name: eth1
          device:
              name: eth1
          forced-network-groups:
            - INTERNAL-API
07070100000046000081A40000000000000000000000015D827255000010EF000000000000000000000000000000000000007100000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/data/network_groups.yml#
# (c) Copyright 2017-2018 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  network-groups:

    #
    # External API
    #
    # This is the network group that users will use to
    # access the public API endpoints of your cloud
    #
    - name: EXTERNAL-API
      hostname-suffix: extapi
      component-endpoints:
        - bind-ext
      load-balancers:
        - provider: ip-cluster
          name: extlb
          # If external-name is set then public urls in keystone
          # will use this name instead of the IP address.
          # You must either set this to a name that can be resolved in your network
          # or comment out this line to use IP addresses
          # external-name:

          tls-components:
            - default
          roles:
            - public
          cert-file: my-public-entry-scale-esx-nsx-cert
          # This is the name of the certificate that will be used on the load balancer.
          # Ardana will look for a file with this name in the config/tls/certs directory.
          # This is the certificate that matches your setting for external-name
          #
          # Note that it is also possible to have per service certificates:
          #
          # cert-file:
          # default: my-public-entry-scale-esx-nsx-cert
          # horizon: my-horizon-cert
          # nova-api: my-nova-cert
          #
          # The configuration-processor will also create a request template for each
          # named certificate under
          # "info/cert_reqs/"
          #
          # And this will be of the form
          #
          # info/cert_reqs/my-public-entry-scale-esx-nsx-cert
          # info/cert_reqs/my-horizon-cert
          # info/cert_reqs/my-nova-cert
          #
          # These request templates contain the subject Alt-names that
          # the certificates need. A customer can add to this template
          # before generating their Certificate Signing Request (CSR).
          # They would then send the CSR to their CA to be signed and
          # receive the certificate, which can then be dropped into
          # "config/tls/certs".
          #
          # When you bring in your own certificate you may want to bring
          # in the trust chains (or CA certificate) for this certificate.
          # This is usually not required if the CA is a public signer that
          # gets bundled by the system. However, we suggest you include it
          # in Ardana anyway by copying the file into the directory
          # "config/cacerts/".
          # Note that the file extension should be .crt or it will not
          # be processed by Ardana.
          #

    #
    # Management
    #
    # This is the network group that will be used for
    # management traffic within the cloud.
    #
    # The interface used by this group will be presented
    # to Neutron as physnet1, and used by tenant VLANs
    #
    - name: MANAGEMENT
      hostname-suffix: mgmt
      hostname: true
      component-endpoints:
        - lifecycle-manager
        - lifecycle-manager-target
      routes:
        - default

    ##
    ## TRUNK
    ##
    ## This is the network group that will be used for
    ## trunk network on the OVSvApp service VM.
    ## The trunk network is used  to apply security
    ## group rules on tenant traffic.
    #- name: TRUNK
    #  hostname-suffix: trunk

    #
    # INTERNAL-API
    #
    - name: INTERNAL-API
      tls-component-endpoints:
        - barbican-api
      component-endpoints:
        - default
      load-balancers:
        - provider: ip-cluster
          name: lb
          tls-components:
            - default
          components:
            - nova-metadata
          roles:
            - internal
            - admin
          cert-file: ardana-internal-cert
07070100000047000081A40000000000000000000000015D82725500000680000000000000000000000000000000000000006B00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/data/networks.yml#
# (c) Copyright 2017 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  networks:
    #
    # This example uses the following networks
    #
    # Network       CIDR             VLAN
    # -------       ----             ----
    # External API  10.0.1.0/24      101 (tagged)
    # Internal API  192.168.50.0/24  102 (tagged)
    # Management    192.168.10.0/24  100 (untagged)
    # Trunk                          untagged
    #
    # Modify these values to match your environment
    #
    - name: EXTERNAL-API-NET
      vlanid: 101
      tagged-vlan: true
      cidr: 10.0.1.0/24
      gateway-ip: 10.0.1.1
      network-group: EXTERNAL-API

    - name: MANAGEMENT-NET
      tagged-vlan: false
      vlanid: 100
      cidr: 192.168.10.0/24
      gateway-ip: 192.168.10.1
      network-group: MANAGEMENT
      addresses:
        - 192.168.10.1-192.168.10.250

#    - name: TRUNK-NET
#      tagged-vlan: false
#      network-group: TRUNK

    - name: INTERNAL-API-NET
      vlanid: 102
      cidr: 192.168.50.0/24
      tagged-vlan: true
      network-group: INTERNAL-API
      addresses:
        - 192.168.50.4-192.168.50.250
07070100000048000081A40000000000000000000000015D82725500000AB5000000000000000000000000000000000000006F00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/data/nic_mappings.yml#
# (c) Copyright 2017 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  # nic-mappings are used to ensure that the device name used by the
  # operating system always maps to the same physical device.
  # A nic-mapping is associated to a server in the server definition.
  # The logical-name specified here can be used as a device name in
  # the network interface-models definitions.
  #
  # - name               user-defined name for each mapping
  #   physical-ports     list of ports for this mapping
  #     - logical-name   device name to be used by the operating system
  #       type           physical port type
  #       bus-address    bus address of the physical device
  #
  # Notes:
  # - The PCI bus addresses are examples. You will need to determine
  #   the values pertinent to your servers. These can be found with
  #   the `lspci` command or from the server BIOS
  # - enclose the bus address in quotation marks so yaml does not
  #   misinterpret the embedded colon (:) characters
  # - simple-port is the only currently supported port type
  # - choosing a new device name prefix (e.g. 'eth' -> 'hed') will
  #   help prevent remapping errors

  nic-mappings:

    - name: ESXI_VMXNET3_4PORT
      physical-ports:
        - logical-name: hed1
          type: simple-port
          bus-address: "0000:06:00.0"

        - logical-name: hed2
          type: simple-port
          bus-address: "0000:07:00.0"

        - logical-name: hed3
          type: simple-port
          bus-address: "0000:08:00.0"

        - logical-name: hed4
          type: simple-port
          bus-address: "0000:09:00.0"

    - name: MY-2PORT-SERVER
      physical-ports:
        - logical-name: hed3
          type: simple-port
          bus-address: "0000:08:00.0"

        - logical-name: hed4
          type: simple-port
          bus-address: "0000:09:00.0"

    - name: ESXI-COMPUTE-3PORT
      physical-ports:
        - logical-name: eth0
          type: simple-port
          bus-address: "0000:06:00.0"
        - logical-name: eth1
          type: simple-port
          bus-address: "0000:07:00.0"
        - logical-name: eth2
          type: simple-port
          bus-address: "0000:08:00.0"
07070100000049000041ED0000000000000000000000025D82725500000000000000000000000000000000000000000000006200000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/data/nsx0707010000004A000081A40000000000000000000000015D827255000016AD000000000000000000000000000000000000007100000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/data/nsx/nsx_config.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2
  configuration-data:
    - name: NSXV-CONFIG-CP1
      services:
        - nsx
      data:

        # (Required) URL for the NSXv manager (e.g. https://management_ip).
        manager_uri: 'https://<nsx-mgr-ip>'

        # (Required) NSXv username.
        user: 'admin'

        # (Required) Encrypted NSX Manager password.
        # Password encryption is done by the script
        # ~/openstack/ardana/ansible/ardanaencrypt.py on the deployer:
        #
        # $ cd ~/openstack/ardana/ansible
        # $ export ARDANA_USER_PASSWORD_ENCRYPT_KEY=<encryption key>
        # $ ./ardanaencrypt.py
        #
        # The script will prompt for the NSX Manager password. The string
        # generated is the encrypted password. Enter the string enclosed
        # by double-quotes below.

        password: "<encrypted-nsx-mgr-passwd-from-ardanaencrypt>"

        # (Required) datacenter id for edge deployment.
        # Retrieved using
        #    http://<vCenter-ip-addr>/mob/?moid=ServiceInstance&doPath=content
        # then click on the value of the rootFolder property. The datacenter_moid
        # is the value of the childEntity property.
        # The vCenter-ip-addr comes from the file pass_through.yml in the
        # input model under "pass-through.global.vmware.ip".
        datacenter_moid: 'datacenter-21'

        # (Required) id of the logical switch for physical network connectivity.
        # How to retrieve
        # 1. Get to the same page where the datacenter_moid is found.
        # 2. Click on the value of the rootFolder property.
        # 3. Click on the value of the childEntity property
        # 4. Look at the network property. The external network is the
        #    network associated with the EXTERNAL VM in vCenter.
        external_network: 'dvportgroup-74'

        # (Required) ids of the clusters containing OpenStack hosts.
        # Retrieved using http://<vcenter-ip-addr>/mob: click on the value
        # of the rootFolder property, then click on the value of the
        # hostFolder property. Cluster_moids are the values under the
        # childEntity property of the compute clusters.
        cluster_moid: 'domain-c33,domain-c35'

        # (Required) resource-pool id for edge deployment.
        resource_pool_id: 'resgroup-67'

        # (Optional) datastore id for edge deployment. If not needed,
        # do not declare it.
        # datastore_id: 'datastore-117'

        # (Required) network scope id of the transport zone.
        # To get the vdn_scope_id, in the vSphere web client from the Home
        # menu:
        #   1. Click on Networking & Security.
        #   2. Click on Installation.
        #   3. Click on the Logical Network Preparation tab.
        #   4. Click on the Transport Zones button.
        #   5. Double-click on the transport zone being configured.
        #   6. Select the Manage tab.
        #   7. The vdn_scope_id will appear at the end of the URL.
        vdn_scope_id: 'vdnscope-1'

        # (Optional) Dvs id for VLAN based networks. If not needed,
        # do not declare it.
        # dvs_id: 'dvs-68'

        # (Required) backup_edge_pool: backup edge pools management range,
        # - <edge_type>:[edge_size]:<minimum_pooled_edges>:<maximum_pooled_edges>
        # - edge_type: service (service edge) or vdr (distributed edge)
        # - edge_size: compact, large (the default), xlarge or quadlarge
        backup_edge_pool: 'service:compact:4:10,vdr:compact:4:10'

        # (Optional) mgt_net_proxy_ips: management network IP address for
        # metadata proxy. If not needed, do not declare it.
        # mgt_net_proxy_ips: '10.142.14.251,10.142.14.252'

        # (Optional) mgt_net_proxy_netmask: management network netmask for
        # metadata proxy. If not needed, do not declare it.
        # mgt_net_proxy_netmask: '255.255.255.0'

        # (Optional) mgt_net_moid: Network ID for management network connectivity
        # Do not declare if not used.
        # mgt_net_moid: 'dvportgroup-73'

        # ca_file: Name of the certificate file. If insecure is set to True,
        # then this parameter is ignored. If insecure is set to False and this
        # parameter is not defined, then the system root CAs will be used
        # to verify the server certificate.
        ca_file: a/nsx/certificate/file

        # insecure:
        # If true (default), the NSXv server certificate is not verified.
        # If false, then the default CA truststore is used for verification.
        # This option is ignored if "ca_file" is set
        insecure: True

        # (Optional) edge_ha: if true, will duplicate any edge pool resources.
        # Defaults to False if undeclared.
        # edge_ha: False

        # (Optional) spoofguard_enabled:
        # If True (default), indicates NSXV spoofguard component is used to
        # implement port-security feature.
        # spoofguard_enabled: True

        # (Optional) exclusive_router_appliance_size:
        # Edge appliance size to be used for creating exclusive router.
        # Valid values: 'compact', 'large', 'xlarge', 'quadlarge'
        # Defaults to 'compact' if not declared.
        # exclusive_router_appliance_size: 'compact'
0707010000004B000081A40000000000000000000000015D827255000007BC000000000000000000000000000000000000006F00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/data/pass_through.yml#
# (c) Copyright 2017-2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
product:
  version: 2
pass-through:
  global:
    vmware:
      - username: <vcenter-admin-username>
        ip: <vcenter-ip>
        port: 443
        cert_check: false
        # The password needs to be encrypted using the script
        # openstack/ardana/ansible/ardanaencrypt.py on the deployer:
        #
        # $ cd ~/openstack/ardana/ansible
        # $ export ARDANA_USER_PASSWORD_ENCRYPT_KEY=<encryption key>
        # $ ./ardanaencrypt.py
        #
        # The script will prompt for the vCenter password. The string
        # generated is the encrypted password. Enter the string
        # enclosed by double-quotes below.
        password: "<encrypted-passwd-from-ardanaencrypt>"

        # The id is obtained from the URL
        # https://<vcenter-ip>/mob/?moid=ServiceInstance&doPath=content%2eabout,
        # field instanceUUID.
        id: <vcenter-uuid>
  servers:
      # Here the 'id' refers to the name of the node running the
      # esx-compute-proxy. This is identical to the 'servers.id' in
      # servers.yml. There should be one esx-compute-proxy node per ESX
      # resource pool.
    - id: esx-compute1
      data:
        vmware:
          vcenter_cluster: <vmware cluster1 name>
          vcenter_id: <vcenter-uuid>
    - id: esx-compute2
      data:
        vmware:
          vcenter_cluster: <vmware cluster2 name>
          vcenter_id: <vcenter-uuid>
0707010000004C000081A40000000000000000000000015D8272550000094D000000000000000000000000000000000000007000000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/data/server_groups.yml#
# (c) Copyright 2017 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  server-groups:

    #
    # Server Groups provide a mechanism for organizing servers
    # into a hierarchy that reflects the physical topology.
    #
    # When allocating a server the configuration processor
    # will search down the hierarchy from the list of server
    # groups identified as the failure-zones for the control
    # plane until it finds an available server of the requested
    # role.   If the allocation policy is "strict" servers are
    # allocated from different failure-zones.
    #
    # When determining which network from a network group to
    # associate with a server the configuration processor will
    # search up the hierarchy from the server group containing the
    # server until it finds a network in the required network
    # group.
    #

    #
    # In this example there is only one network in each network
    # group and so we put all networks in the top level server
    # group.   Below this we create server groups for three
    # failure zones, within which servers are grouped by racks.
    #
    # Note: the association of servers to server groups is part
    # of the server definition (servers.yml)
    #

    #
    # At the top of the tree we have a server group for
    # networks that can reach all servers
    #
    - name: CLOUD
      server-groups:
        - AZ1
        - AZ2
        - AZ3
      networks:
        - EXTERNAL-API-NET
        - MANAGEMENT-NET
        - INTERNAL-API-NET

    #
    # Create a group for each failure zone
    #
    - name: AZ1
      server-groups:
        - RACK1

    - name: AZ2
      server-groups:
        - RACK2

    - name: AZ3
      server-groups:
        - RACK3

    #
    # Create a group for each rack
    #
    - name: RACK1

    - name: RACK2

    - name: RACK3
0707010000004D000081A40000000000000000000000015D82725500000351000000000000000000000000000000000000006F00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/data/server_roles.yml#
# (c) Copyright 2017 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  server-roles:

    - name: CONTROLLER-ROLE
      interface-model: CONTROLLER-INTERFACES
      disk-model: CONTROLLER-1TB-DISKS

    - name: ESX-COMPUTE-ROLE
      interface-model: ESX-COMPUTE-INTERFACES
      disk-model: COMPUTE-NODE-DISKS
0707010000004E000081A40000000000000000000000015D82725500000936000000000000000000000000000000000000006A00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/data/servers.yml#
# (c) Copyright 2017 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2

  baremetal:
    # NOTE: These values need to be changed to match your environment.
    # Define the network range that contains the ip-addr values for
    # the individual servers listed below.
    subnet: 192.168.10.0
    netmask: 255.255.255.0

  servers:
    # NOTE: Addresses of servers need to be
    #       changed to match your environment.
    #
    #       Add additional servers as required

    # Controllers
    - id: controller1
      ip-addr: 192.168.10.3
      role: CONTROLLER-ROLE
      server-group: RACK1
      nic-mapping: ESXI_VMXNET3_4PORT
      mac-addr: "b2:72:8d:ac:7c:6f"
      ilo-ip: 192.168.9.3
      ilo-password: password
      ilo-user: admin

    - id: controller2
      ip-addr: 192.168.10.4
      role: CONTROLLER-ROLE
      server-group: RACK2
      nic-mapping: ESXI_VMXNET3_4PORT
      mac-addr: "8a:8e:64:55:43:76"
      ilo-ip: 192.168.9.4
      ilo-password: password
      ilo-user: admin

    - id: controller3
      ip-addr: 192.168.10.5
      role: CONTROLLER-ROLE
      server-group: RACK3
      nic-mapping: ESXI_VMXNET3_4PORT
      mac-addr: "26:67:3e:49:5a:a7"
      ilo-ip: 192.168.9.5
      ilo-password: password
      ilo-user: admin

    # Nova Compute proxy node
    - id: esx-compute1
      server-group: RACK1
      nic-mapping: ESXI-COMPUTE-3PORT
      ip-addr: 192.168.10.6
      mac-addr: "00:de:ad:be:ef:10"
      role: ESX-COMPUTE-ROLE
      ilo-ip: 1.1.1.10
      ilo-user: dummy-user
      ilo-password: dummy-password

    - id: esx-compute2
      server-group: RACK1
      nic-mapping: ESXI-COMPUTE-3PORT
      ip-addr: 192.168.10.7
      mac-addr: "00:de:ad:be:ef:11"
      role: ESX-COMPUTE-ROLE
      ilo-ip: 1.1.1.11
      ilo-user: dummy-user
      ilo-password: dummy-password
0707010000004F000041ED0000000000000000000000025D82725500000000000000000000000000000000000000000000006400000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/data/swift07070100000050000081A40000000000000000000000015D8272550000060A000000000000000000000000000000000000007500000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/examples/models/entry-scale-nsxv/data/swift/swift_config.yml#
# (c) Copyright 2017 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
product:
  version: 2

configuration-data:
  - name: SWIFT-CONFIG-CP1
    services:
      - swift
    data:
      control_plane_rings:
        swift-zones:
          - id: 1
            server-groups:
              - AZ1
          - id: 2
            server-groups:
              - AZ2
          - id: 3
            server-groups:
              - AZ3
        rings:
          - name: account
            display-name: Account Ring
            min-part-hours: 16
            partition-power: 12
            replication-policy:
              replica-count: 3

          - name: container
            display-name: Container Ring
            min-part-hours: 16
            partition-power: 12
            replication-policy:
              replica-count: 3

          - name: object-0
            display-name: General
            default: yes
            min-part-hours: 16
            partition-power: 12
            replication-policy:
              replica-count: 3
07070100000051000041ED0000000000000000000000035D82725500000000000000000000000000000000000000000000004100000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/services07070100000052000041ED0000000000000000000000025D82725500000000000000000000000000000000000000000000004800000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/services/vmware07070100000053000081A40000000000000000000000015D82725500000285000000000000000000000000000000000000005000000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/services/vmware/nsx.yml# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
product:
    version: 2

services:
-   name: nsx
    mnemonic: NSX
07070100000054000081A40000000000000000000000015D8272550000039D000000000000000000000000000000000000005C00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/services/vmware/vmware-nsxt-dns.yml# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
product:
    version: 2

service-components:
-   name: vmware-nsxt-dns
    mnemonic: VMW-NSXT-DNS
    service: nsx

    requires:
      - name: vmware-nsxt
        scope: host

    provides-data:
      - to:
          - name: vmware-nsxt
        data:
          - option: nsx_extension_drivers
            values:
              - vmware_nsxv3_dns
07070100000055000081A40000000000000000000000015D827255000005A4000000000000000000000000000000000000005E00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/services/vmware/vmware-nsxt-fwaas.yml# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
product:
    version: 2

service-components:
-   name: vmware-nsxt-fwaas
    mnemonic: VMW-NSXT-FWAAS
    service: nsx

    requires:
      - name: vmware-nsxt
        scope: host

    provides-data:
      - to:
          - name: neutron-server
        data:
          - option: service_plugins
            values:
              - neutron_fwaas.services.firewall.fwaas_plugin_v2.FirewallPluginV2
          - option: fwaas_service_provider
            values:
              - FIREWALL_V2:fwaas_db:neutron_fwaas.services.firewall.service_drivers.agents.agents.FirewallAgentDriver:default
          - option: fwaas_driver
            values:
              - vmware_nsxv3_edge_v2
          - option: policy_json
            values:
              - source:  ../../vmware-nsx/templates/policy.d/neutron-fwaas.json.j2
                dest: policy.d/nsxt-neutron-fwaas.json
07070100000056000081A40000000000000000000000015D827255000004D5000000000000000000000000000000000000005D00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/services/vmware/vmware-nsxt-l2gw.yml# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
product:
    version: 2

service-components:
-   name: vmware-nsxt-l2gateway
    mnemonic: VMW-NSXT-L2GW
    service: nsx

    requires:
      - name: vmware-nsxt
        scope: host

    provides-data:
      - to:
          - name: neutron-server
        data:
          - option: service_plugins
            values:
              - networking_l2gw.services.l2gateway.plugin.L2GatewayPlugin
          - option: l2gw_service_provider
            values:
              - L2GW:vmware-nsx-l2gw:vmware_nsx.services.l2gateway.nsx_v3.driver.NsxV3Driver:default
          - option: neutron_server_config_file_args
            values:
              - l2gw_plugin.ini
07070100000057000081A40000000000000000000000015D82725500000520000000000000000000000000000000000000005E00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/services/vmware/vmware-nsxt-lbaas.yml# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
product:
    version: 2

service-components:
-   name: vmware-nsxt-lbaas
    mnemonic: VMW-NSXT-LBAAS
    service: nsx

    requires:
      - name: vmware-nsxt
        scope: host

    provides-data:
      - to:
          - name: neutron-server
        data:
          - option: service_plugins
            values:
              - vmware_nsx.services.lbaas.nsx_plugin.LoadBalancerNSXPluginV2
          - option: lbaas_service_provider
            values:
              - LOADBALANCERV2:VMWareEdge:neutron_lbaas.drivers.vmware.edge_driver_v2.EdgeLoadBalancerDriverV2:default
          - option: api_extensions_path
            values:
              - '{{ ''neutron'' | venv_dir }}/lib/python2.7/site-packages/neutron_lbaas/extensions'
07070100000058000081A40000000000000000000000015D82725500000310000000000000000000000000000000000000005D00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/services/vmware/vmware-nsxt-node.yml#
# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
product:
    version: 2

service-components:

  - name: vmware-nsxt-node
    mnemonic: VMW-NSXT-NODE
    service: nsx

    endpoints:
      - port: 22
        protocol: tcp
        roles:
          - ssh
07070100000059000081A40000000000000000000000015D827255000003B3000000000000000000000000000000000000005C00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/services/vmware/vmware-nsxt-qos.yml# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
product:
    version: 2

service-components:
-   name: vmware-nsxt-qos
    mnemonic: VMW-NSXT-QOS
    service: nsx

    requires:
      - name: vmware-nsxt
        scope: host

    provides-data:
      - to:
          - name: neutron-server
        data:
          - option: service_plugins
            values:
              - neutron.services.qos.qos_plugin.QoSPlugin
0707010000005A000081A40000000000000000000000015D82725500000508000000000000000000000000000000000000005F00000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/services/vmware/vmware-nsxt-vpnaas.yml# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
product:
    version: 2

service-components:
-   name: vmware-nsxt-vpnaas
    mnemonic: VMW-NSXT-VPNAAS
    service: nsx

    requires:
      - name: vmware-nsxt
        scope: host

    provides-data:
      - to:
          - name: neutron-server
        data:
          - option: service_plugins
            values:
              - vmware_nsx.services.vpnaas.nsx_plugin.NsxVPNPlugin
          - option: vpnaas_service_provider
            values:
              - VPN:vmware:vmware_nsx.services.vpnaas.nsxv3.ipsec_driver.NSXv3IPsecVpnDriver:default
          - option: api_extensions_path
            values:
              - '{{ ''neutron'' | venv_dir }}/lib/python2.7/site-packages/neutron_vpnaas/extensions'
0707010000005B000081A40000000000000000000000015D827255000006B3000000000000000000000000000000000000005800000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/services/vmware/vmware-nsxt.yml# (c) Copyright 2019 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
product:
    version: 2

service-components:
-   name: vmware-nsxt
    mnemonic: VMW-NSXT
    service: nsx

    consumes-services:
      - service-name: NOV-MTD

    provides-data:
      - to:
          - name: neutron-server
        data:
          - option: core_plugin
            values:
              - vmware_nsx.plugin.NsxV3Plugin
          - option: config_files
            values:
              - source: ../../vmware-nsx/templates/nsxt.ini.j2
                dest: nsxt.ini
          - option: neutron_server_config_file_args
            values:
              - nsxt.ini
          - option: neutron_db_manage_config_file_args
            values:
              - nsxt.ini
          - option: policy_json
            values:
              - source:  ../../vmware-nsx/templates/policy.d/security-groups.json.j2
                dest: policy.d/nsxt-security-groups.json
              - source:  ../../vmware-nsx/templates/policy.d/routers.json.j2
                dest: policy.d/routers.json
      - to:
          - name: nova-compute-kvm
        data:
          - option: ovs_bridge
            values:
              - nsx-managed
0707010000005C000081A40000000000000000000000015D82725500000626000000000000000000000000000000000000005800000000ardana-extensions-nsx-9.0+git.1568830037.2eea267/vmware/services/vmware/vmware-nsxv.yml# (c) Copyright 2017 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
product:
    version: 2

service-components:
-   name: vmware-nsxv
    mnemonic: VMW-NSXV
    service: nsx

    consumes-services:
      - service-name: NOV-MTD

    provides-data:
      - to:
          - name: neutron-server
        data:
          - option: core_plugin
            values:
              - vmware_nsx.plugin.NsxVPlugin
          - option: config_files
            values:
              - source: ../../vmware-nsx/templates/nsxv.ini.j2
                dest: nsxv.ini
          - option: neutron_server_config_file_args
            values:
              - nsxv.ini
          - option: neutron_db_manage_config_file_args
            values:
              - nsxv.ini
          - option: policy_json
            values:
              - source:  ../../vmware-nsx/templates/policy.d/routers.json.j2
                dest: policy.d/routers.json
              - source:  ../../vmware-nsx/templates/policy.d/security-groups.json.j2
                dest: policy.d/nsxv-security-groups.json
07070100000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000B00000000TRAILER!!!