openstack-cluster-setup

This repository contains Ansible playbooks for installing and configuring an OpenStack Kilo cluster for use with XOS. This is how we build clusters for OpenCloud, and it is also the method for installing a CORD development POD.

All of the OpenStack controller services are installed in VMs on a single "head node" and connected by an isolated private network. Juju is used to install and configure the OpenStack services.
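After an install completes, you can see these service VMs from the head node. A minimal check, assuming the playbooks manage the VMs through libvirt (the files/etc/libvirt hook in this repository suggests they do), is:

$ sudo virsh list --all      # the VMs hosting the OpenStack services should appear here once the install has run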

Prerequisites (OpenCloud and CORD)

  • Set up control machine: The install playbooks in this repository can either run on a separate control machine (e.g., a laptop) or on the cluster head node. Either way:
    • Install a recent version of Ansible (Ansible 1.9.x on Mac OS X or Ubuntu should work).
    • Be able to log in to all of the cluster servers from the control machine using SSH (a quick setup check is sketched after this list).
  • Set up servers: One server in the cluster will be the "head" node, running the OpenStack services. The rest will be "compute" nodes.
    • Install Ubuntu 14.04 LTS on all servers.
    • The user account used to log in from the control machine must have sudo access.
    • Each server should have a single active NIC (preferably eth0) with connectivity to the Internet.
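As a rough illustration, preparing an Ubuntu control machine and checking SSH access might look like the following; the PPA and the hostname are assumptions for the sketch, not requirements of this repository:

# One way to get a recent Ansible on Ubuntu (the PPA is an assumption)
$ sudo apt-get install -y software-properties-common
$ sudo add-apt-repository -y ppa:ansible/ansible
$ sudo apt-get update && sudo apt-get install -y ansible
$ ansible --version

# Key-based SSH from the control machine to every server (hostname is a placeholder)
$ ssh-copy-id ubuntu@head.example.org
$ ssh ubuntu@head.example.org        # log in and confirm the account has sudo access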

How to install a CORD POD

The CORD POD install procedure uses the "head node" of the cluster as the control machine for the install. As mentioned above, install Ansible on the head node and check out this repository.

The playbooks assume that a bridge called mgmtbr on the head node is connected to the management network. Note also that there must be a DHCP server on the management network that:

  1. hands out IP addresses to VMs connected to mgmtbr
  2. resolves VM names to IP addresses
  3. is configured as a resolver on the head and compute nodes
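As an illustration only, a minimal dnsmasq configuration covering these three requirements might look like the following; the interface, address range, and domain are assumed values, not ones taken from this repository:

# /etc/dnsmasq.d/mgmt.conf (sketch); run on a host attached to the management network,
# and point /etc/resolv.conf on the head and compute nodes at that host (requirement 3).

# Listen only on the management bridge
interface=mgmtbr

# Hand out addresses to VMs connected to mgmtbr (requirement 1; the range is an example)
dhcp-range=192.168.100.50,192.168.100.254,12h

# Resolve the hostnames that DHCP clients register, e.g. xos and onos-cord (requirement 2)
expand-hosts
domain=cord.lab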

If you need to set up dnsmasq to do this, take a look at this example. Then follow these steps:

  • Edit cord-hosts with the DNS names of your compute nodes, and update the ansible_ssh_user variable appropriately (an example inventory is sketched after this list). Before proceeding, this needs to work on the head node: ansible -i cord-hosts all -m ping
  • Run: ansible-playbook -i cord-hosts cord-setup.yml
  • After the playbook finishes, wait for the OpenStack services to come up. You can check on their progress using juju status --format=tabular
  • Once the services are up, you can use the admin-openrc.sh credentials in the home directory to interact with OpenStack. You can SSH to any VM using ssh ubuntu@<vm-name>
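For the first step above, cord-hosts is a standard Ansible inventory file. A sketch with placeholder hostnames might look like this; keep whatever groups the stock cord-hosts file already defines, since the group names below are assumptions:

# cord-hosts (sketch); hostnames and group names are placeholders
[head]
head1.example.org

[compute]
compute1.example.org
compute2.example.org

[all:vars]
ansible_ssh_user=ubuntu

With that in place, ansible -i cord-hosts all -m ping should report "pong" from every host before you run the playbook.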
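Once juju status reports the services as started, a quick smoke test from the head node could look like the following, assuming the OpenStack command-line clients are available there (an assumption, not something this README guarantees):

$ source ~/admin-openrc.sh
$ nova service-list          # OpenStack services registered with Nova
$ ssh ubuntu@xos             # the xos and onos-cord VMs created by the playbook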

This will bring up various OpenStack services, including Neutron with the VTN plugin. It will also create two VMs called xos and onos-cord and prep them. Configuring and running XOS and ONOS in these VMs is beyond the scope of this README.

Caveats

  • The goal is to configure HA for the OpenStack services, but this is not yet implemented.

How to install an OpenCloud cluster

Once the prerequisites are satisfied, here are the basic steps for installing a new OpenCloud cluster named 'foo':

  • Create foo-setup.yml and foo-compute.yml files using cloudlab-setup.yml and cloudlab-compute.yml as templates. Create a foo-hosts file with the DNS names of your nodes based on cloudlab-hosts. (The full sequence is sketched after this list.)
  • If you are not installing on CloudLab, edit foo-hosts and add cloudlab=False under [all:vars].
  • If you are installing a cluster for inclusion in the public OpenCloud, change mgmt_net_prefix in foo-setup.yml to be unique across all OpenCloud clusters.
  • To set up Juju, use it to install the OpenStack services on the head node, and prep the compute nodes, run the following on the control machine:
$ ansible-playbook -i foo-hosts foo-setup.yml
  • Log into the head node. For each compute node, put it under control of Juju, e.g.:
$ juju add-machine ssh:ubuntu@compute-node
  • To install the nova-compute service on the compute nodes that were added to Juju, run the following on the control machine:
$ ansible-playbook -i foo-hosts foo-compute.yml
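Putting these steps together for a hypothetical cluster named foo (the hostnames below are placeholders):

# On the control machine: create the cluster-specific files from the CloudLab templates
$ cp cloudlab-setup.yml foo-setup.yml        # adjust mgmt_net_prefix here if joining the public OpenCloud
$ cp cloudlab-compute.yml foo-compute.yml
$ cp cloudlab-hosts foo-hosts                # edit hostnames; add cloudlab=False under [all:vars] if not on CloudLab

# Install the head node and prep the compute nodes
$ ansible-playbook -i foo-hosts foo-setup.yml

# On the head node: hand each compute node over to Juju
$ for node in compute1.foo.example.org compute2.foo.example.org; do juju add-machine ssh:ubuntu@$node; done

# Back on the control machine: install nova-compute on the nodes Juju now manages
$ ansible-playbook -i foo-hosts foo-compute.yml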

Caveats

  • The installation configures port forwarding so that the OpenStack services can be accessed from outside the private network. Some OpenCloud-specific firewalling is also introduced, which will likely require modification for other setups. See: files/etc/libvirt/hooks/qemu.
  • By default the compute nodes are controlled and updated automatically using ansible-pull from this repo. You may want to change this.
  • All of the service interfaces are configured to use SSL because that's what OpenCloud uses in production. To turn this off, look for the relevant Juju commands in cloudlab-setup.yml.
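As a hedged illustration only: with Juju 1.x (the version contemporary with OpenStack Kilo, assumed here), charm options are read and changed with juju get and juju set, so disabling SSL would look roughly like the lines below. The option name is an assumption; take the real service names and options from the Juju commands in cloudlab-setup.yml before changing anything.

$ juju get keystone                  # inspect the charm's current configuration options
$ juju set keystone use-https=no     # example option only; not verified against the playbook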