CORD-714
initial set of roles/playbooks
bug fixes
fix docker-compose logging, pull xos-base image
dynamically add docker instances to ansible inventory after onboarding
Bootstrap the onboarding synchronizer
more defaults, reload vars after adding docker inventory
move TOSCA templates to cord-profile, random password on admin
fixes for docker compose, paths in xos.yaml in jinja
don't try to mangle XOS _config files (yet)
create xos-test image
make xos-test use locally built xosproject/xos
add docker-compose v2 format networking
fix docker-compose template
path fixes, move ssh keys
service names/paths aren't so simple
added api-tests, teardown roles
scan the onboarded docker-compose file for ansible inventory
add various tests used by test-standalone profile
fixes for API tests
save test output to /tmp/<testname>.out, bugfixes
autogenerate openstack keystone admin password, fix tests
remove nonfunctional UI tests
change location of cord_profile, use inventory to specify profile
fix YAML escaping of backslashes in regex
bugfixes after path change, add teardown playbook
back out setting of cord_dir with ansible_user_dir, which differs depending on context
gradle build fixes, renaming
fix yml/yaml naming issue
null xos_images default
added rcord/mcord frontend variants, exampleservice onboarding
add missing role, help text in cord-bootstrap.sh
bugfix
create/run deployment.yaml by default
allow teardown to handle partially built pods, bugfix to deployment.yaml generation
add defaults, fix path for exampleservice
revert yaml naming to ease testing, rename mocks
debugging
exampleservice onboarding, mounting volume in XOS container
bugfix
add volume mounts when creating xos_ui, don't double add to ansible inventory
post-onboard TOSCA config
typo fixes, order of loading TOSCA
config bits for cord-pod, some var renaming
update documentation, rename to rcord
doc fixes
support for building just before XOS install, docs
fix tests, refactor how compute nodes are configured, split vtn service config from adding a node
remove build process from deploy repo
inclusion/merge of PKI support
typo
bugfixes and change to use cord instead of opencord for install dir
fix pki support
fix ssh key paths
update xos ui/bs ports, fix onboarding on vagrant
have compute enlist script use same config file as other playbooks
fix ports, add MaaS version of compute node enable script
fix port and nodes.yaml loading
generate API SSL cert for all profiles
remove cord-app-build which is vestigial
remove config dir
default xos_ui_port in xos-ready role
use xostosca from service-profile/cord-pod-ansible to handle POST form-encode
fix nodes.yaml, variable name in xostosca, and include openstack properly
copy cert chain to build into XOS container
increase onboarding timeouts, don't restart docker
fix ONOS app versions and network settings
fix management_hosts network optional include
fix management/fabric settings
avoid modifying service#ONOS_CORD when adding nodes
split out compute node and vtn config, put delay between
fix template generation and fail on file not found
rename vars to profile_manifests, fix redis include
whitespace fix
increase timeout
reenable platform-check
parameterize node_key path, set defaults and fix platform-check
workaround for onboarding sync, minor fixes
pause in middle of VTN bug workaround
reload openstack config as well
disable platform-check role as a test
fixed head-diag role
reapply VTN config during compute node enable
Create exampleservice instance during test

Change-Id: I87e171bcfa429e65e1075a1ee4c97de1e90a7dd5
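
The "dynamically add docker instances to ansible inventory" change can be sketched roughly as follows. This is a hypothetical illustration, not the actual role contents: the task names and the listing command are assumptions, while the `xos_ui` group and `connection: docker` usage come from the playbooks in this change.

```yaml
---
# Hypothetical sketch: register running docker containers as in-memory
# Ansible hosts so that later plays can target them with `connection: docker`.
- name: List running docker container names
  shell: "docker ps | awk 'NR > 1 { print $NF }'"
  register: container_names

- name: Add each container to the in-memory xos_ui group
  add_host:
    name: "{{ item }}"
    groups: xos_ui
    ansible_connection: docker
  with_items: "{{ container_names.stdout_lines }}"
```
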
diff --git a/.gitignore b/.gitignore
index a8b42eb..864cb97 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1 +1,3 @@
+*.pyc
 *.retry
+credentials/*
diff --git a/INSTALL_SINGLE_NODE.md b/INSTALL_SINGLE_NODE.md
deleted file mode 100644
index b1489e0..0000000
--- a/INSTALL_SINGLE_NODE.md
+++ /dev/null
@@ -1,82 +0,0 @@
-# Installing a CORD POD on a Single Physical Host
-
-*A full description of how to bring up a CORD POD on a single physical host, using the CORD developer
-environment, is [described here](https://github.com/opencord/cord/blob/master/docs/quickstart.md).
-That's probably what you want.*
-
-This page describes a simple alternative method for setting up a single-node POD that does not
-require a separate build host running Vagrant.  It's mainly for developers looking to
-set up a custom POD and run tests on it.
-
-## What you need (Prerequisites)
-You need a target server meeting the requirements below:
-* Fresh install of Ubuntu 14.04 LTS with latest updates
-* Minimum 12 CPU cores, 48GB RAM, 1TB disk
-* Access to the Internet
-* A user account with password-less *sudo* capability (e.g., the *ubuntu* user)
-
-## Run scripts/single-node-pod.sh
-
-The [single-node-pod.sh](scripts/single-node-pod.sh) script in the `scripts` directory
-of this repository can be used to build and test a single-node CORD POD.
-It should be run on the target server in a user account with password-less
-*sudo* capability.  The most basic way to run the script is as follows:
-
-```
-$ wget https://raw.githubusercontent.com/opencord/platform-install/master/scripts/single-node-pod.sh
-$ bash single-node-pod.sh
-```
-
-The script will load the necessary software onto the target server, download the `master` branch of
-this repository, and run an Ansible playbook to set up OpenStack, ONOS, and XOS.
-
-Note that this process
-will take at least an hour!  Also some individual steps in the playbook can take 30 minutes or more.
-*Be patient!*
-
-### Script options
-
-Run `bash single-node-pod.sh -h` for a list of options:
-
-```
-~$ bash single-node-pod.sh -h
-Usage:
-    single-node-pod.sh                install OpenStack and prep XOS and ONOS VMs [default]
-    single-node-pod.sh -b <branch>    checkout <branch> of the xos git repo
-    single-node-pod.sh -c             cleanup from previous test
-    single-node-pod.sh -d             don't run diagnostic collector
-    single-node-pod.sh -h             display this help message
-    single-node-pod.sh -i <inv_file>  specify an inventory file (default is inventory/single-localhost)
-    single-node-pod.sh -p <git_url>   use <git_url> to obtain the platform-install git repo
-    single-node-pod.sh -r <git_url>   use <git_url> to obtain the xos git repo
-    single-node-pod.sh -s <branch>    checkout <branch> of the platform-install git repo
-    single-node-pod.sh -t             do install, bring up cord-pod configuration, run E2E test
-```
-
-A few useful options are:
-
-The `-s` option can be used to install different versions of the CORD POD.  For example, to install
-the latest CORD v1.0 release candidate:
-
-```
-~$ bash single-node-pod.sh -s cord-1.0
-```
-
-The `-t` option runs a couple of tests on the POD after it has been built:
-  - `test-vsg:` Adds a CORD subscriber to XOS, brings up a vSG for the subscriber, creates a simulated
-     device in the subscriber's home (using an LXC container), and runs a `ping` from the device
-     through the vSG to the Internet.  This test demonstrates that the vSG is working.
-  - `test-exampleservice:` Assumes that `test-vsg` has already been run to set up a vSG.  Onboards
-     the `exampleservice` described in the
-     [Tutorial on Assembling and On-Boarding Services](https://wiki.opencord.org/display/CORD/Assembling+and+On-Boarding+Services%3A+A+Tutorial)
-     and creates an `exampleservice` tenant in XOS.  This causes the `exampleservice` synchronizer
-     to spin up a VM, install Apache in the VM, and configure Apache with a "hello world" welcome message.
-     This test demonstrates a customer-facing service being added to the POD.
-
-The `-c` option deletes all state left over from a previous install.  For example, the
-[nightly Jenkins E2E test](https://jenkins.opencord.org/job/cord-single-node-pod-e2e/) runs
-the script as follows:
-```
-~$ bash single-node-pod.sh -c -t
-```
-This invocation cleans up the previous build, brings up the POD, and runs the tests described above.
diff --git a/README.md b/README.md
index 289825f..888fb02 100644
--- a/README.md
+++ b/README.md
@@ -1,18 +1,139 @@
-# platform-install
+# CORD platform-install
 
 This repository contains [Ansible](http://docs.ansible.com) playbooks for
-installing and configuring software components on a CORD POD: OpenStack, ONOS,
-and XOS.  It is a sub-module of the [main CORD
+installing and configuring software components that build a CORD POD:
+OpenStack, ONOS, and XOS.
+
+It is used as a sub-module of the [main CORD
+repository](https://github.com/opencord/cord), but can independently bring up
+various CORD profiles for development work.
+
+If you want to set up an entire CORD pod on physical hardware, or set up the
+Cord-in-a-Box deployment, you should start at the [CORD
 repository](https://github.com/opencord/cord).
 
-To install a single-node CORD POD, read
-[INSTALL_SINGLE_NODE.md](./INSTALL_SINGLE_NODE.md).
+## Using platform-install for development
 
-Otherwise you should start with the [CORD
-repository](https://github.com/opencord/cord).
+### Bootstrapping your development environment
 
-# Lint checking your code
+There's a helper script,
+[scripts/cord-bootstrap.sh](https://github.com/opencord/platform-install/blob/master/scripts/cord-bootstrap.sh),
+that will install development environment prerequisites on an Ubuntu 14.04 node.
+You can download it with:
 
-Before commit, please run `scripts/lintcheck.sh`, which will perform the same
-lint check that Jenkins performs when in review in Gerrit.
+```
+curl -o ~/cord-bootstrap.sh https://github.com/opencord/platform-install/raw/master/scripts/cord-bootstrap.sh
+```
+
+Running the script will install the [repo](https://code.google.com/p/git-repo/)
+tool, [Ansible](https://docs.ansible.com/ansible/index.html), and
+[Docker](https://www.docker.com/), as well as make a checkout of the CORD
+[manifest](https://gerrit.opencord.org/gitweb?p=manifest.git;a=blob;f=default.xml)
+into `~/cord`.
+
+You can specify which gerrit changesets you would like repo to checkout using
+the `-b` option on the script [as documented
+here](https://github.com/opencord/cord/blob/master/docs/quickstart.md#using-cord-in-a-boxsh-to-download-development-code-from-gerrit).
+
+Once you have done this, if you're not already in the `docker` group, you
+should log out and log back into your system to refresh your user account's
+group membership. If you don't do this, any docker commands that you run (or
+that Ansible runs for you) will fail. You can check your group membership by
+running `groups`.
+
+Once you log back in, you may want to run `tmux` to maintain a server-side
+session you can reconnect to, in case of network trouble.
+
+All of the commands below assume you're in the `cord/build/platform-install`
+directory.
+
+### Credentials
+
+Credentials are autogenerated and placed in the `credentials/` directory when
+the playbooks are run; the credential name is the filename, and the contents
+of the file are the password.
+
+For most profiles the XOS admin user is named `xosadmin@opencord.org`.
+
+### Development Loop
+
+Most profiles are run by specifying an inventory file when running
+`ansible-playbook`.  Most of the time, you want to run the
+`deploy-xos-playbook.yml` playbook.
+
+For example, to run the `frontend` config, you would run:
+
+```
+ansible-playbook -i inventory/frontend deploy-xos-playbook.yml
+```
+
+Assuming it runs without error, you can then explore the environment you've set
+up.  When you're ready to tear down your environment, run:
+
+```
+ansible-playbook -i inventory/frontend teardown-playbook.yml
+```
+
+This will destroy all the docker containers created, and delete the
+`~/cord_profile` directory.
+
+You can then re-run, or run a different profile.
+
+### Creating a new CORD profile
+
+To create a new CORD profile, you should:
+
+1. Create an inventory file in `inventory/` that defines the `cord_profile`
+   variable, with the name of your profile.
+
+```
+[all:vars]
+cord_profile=my-profile
+```
+
+2. Create a .yaml variables file in `profile_manifests/` with the name of your
+   profile (ex: `my-profile.yaml`), and populate it with your configuration.
+
+3. To test the profile, run the `deploy-xos-playbook.yml` playbook using your
+   inventory profile:
+   `ansible-playbook -i inventory/my-profile deploy-xos-playbook.yml`
+
+### Making changes and lint checking your changes
+
+Before committing, please run `./scripts/lintcheck.sh .` in the repo root, which
+will perform the same [ansible-lint](https://pypi.python.org/pypi/ansible-lint)
+check that Jenkins performs on changes in review in Gerrit.
+
+## Notes on specific profiles
+
+### api-test
+
+This profile runs API tests for both the REST and TOSCA APIs. This can be done
+in an automated fashion:
+
+`ansible-playbook -i inventory/api-test api-test-playbook.yml`
+
+The XOS credentials in this config are `padmin@vicci.org` and `letmein` (until
+the tests are modified to support generated credentials).
+
+### frontend
+
+Builds a basic XOS frontend installation, useful for UI testing and
+experimentation.
+
+### mock-rcord, mock-mcord
+
+Builds a mock R-CORD or M-CORD pod without running service synchronizers, in a
+manner similar to the `frontend` profile.
+
+### opencloud
+
+Used as a part of the [OpenCloud](http://www.opencloud.us/) deployment. Similar
+to `rcord`.
+
+### rcord
+
+Used as a part of the [R-CORD](https://github.com/opencord/cord) deployment.
+Sets up infrastructure pieces including OpenStack (via Juju) and ONOS as well
+as XOS.
 
diff --git a/Vagrantfile b/Vagrantfile
deleted file mode 100644
index 5db90c5..0000000
--- a/Vagrantfile
+++ /dev/null
@@ -1,28 +0,0 @@
-# -*- mode: ruby -*-
-# vi: set ft=ruby :
-
-Vagrant.configure(2) do |config|
-
-  if (/cygwin|mswin|mingw|bccwin|wince|emx/ =~ RUBY_PLATFORM) != nil
-    config.vm.synced_folder ".", "/platform-install", mount_options: ["dmode=700,fmode=600"]
-  else
-    config.vm.synced_folder ".", "/platform-install"
-  end
-
-  config.vm.define "platdev" do |d|
-    d.ssh.forward_agent = true
-    d.vm.box = "ubuntu/trusty64"
-    d.vm.hostname = "platdev"
-    d.vm.network "private_network", ip: "10.100.198.200"
-    d.vm.provision :shell, path: "scripts/bootstrap_ansible.sh"
-    d.vm.provision :shell, inline: "PYTHONUNBUFFERED=1 ansible-playbook /platform-install/ansible/platdev.yml -c local"
-    d.vm.provider "virtualbox" do |v|
-      v.memory = 2048
-    end
-  end
-
-  if Vagrant.has_plugin?("vagrant-cachier")
-    config.cache.scope = :box
-  end
-
-end
diff --git a/add-bootstrap-containers-playbook.yml b/add-bootstrap-containers-playbook.yml
new file mode 100644
index 0000000..ebb0890
--- /dev/null
+++ b/add-bootstrap-containers-playbook.yml
@@ -0,0 +1,17 @@
+---
+# add-bootstrap-containers-playbook.yml
+
+- name: Add bootstrapped containers to inventory
+  hosts: head
+  roles:
+    - xos-bootstrap-hosts
+
+- name: Re-include vars after adding bootstrap containers
+  hosts: xos_bootstrap_ui
+  tasks:
+    - name: Include variables
+      include_vars: "{{ item }}"
+      with_items:
+        - "profile_manifests/{{ cord_profile }}.yml"
+        - profile_manifests/local_vars.yml
+
diff --git a/add-onboard-containers-playbook.yml b/add-onboard-containers-playbook.yml
new file mode 100644
index 0000000..4deba1b
--- /dev/null
+++ b/add-onboard-containers-playbook.yml
@@ -0,0 +1,17 @@
+---
+# add-onboard-containers-playbook.yml
+
+- name: Add onboarded containers to inventory
+  hosts: head
+  roles:
+    - xos-onboard-hosts
+
+- name: Re-include vars after adding onboarded containers
+  hosts: xos_ui
+  tasks:
+    - name: Include variables
+      include_vars: "{{ item }}"
+      with_items:
+        - "profile_manifests/{{ cord_profile }}.yml"
+        - profile_manifests/local_vars.yml
+
diff --git a/ansible/ansible.cfg b/ansible/ansible.cfg
deleted file mode 100644
index bd331b2..0000000
--- a/ansible/ansible.cfg
+++ /dev/null
@@ -1,9 +0,0 @@
-[defaults]
-callback_plugins=/etc/ansible/callback_plugins/
-host_key_checking=False
-deprecation_warnings=False
-
-[privilege_escalation]
-become=True
-become_method=sudo
-become_user=root
diff --git a/ansible/platdev.yml b/ansible/platdev.yml
deleted file mode 100644
index d3a34fc..0000000
--- a/ansible/platdev.yml
+++ /dev/null
@@ -1,7 +0,0 @@
-- hosts: localhost
-  remote_user: vagrant
-  serial: 1
-  roles:
-    - common
-    - java8-oracle
-    - buildtools
diff --git a/ansible/roles/buildtools/defaults/main.yml b/ansible/roles/buildtools/defaults/main.yml
deleted file mode 100644
index b7568df..0000000
--- a/ansible/roles/buildtools/defaults/main.yml
+++ /dev/null
@@ -1,2 +0,0 @@
-apt_packages:
-  - maven
diff --git a/ansible/roles/buildtools/tasks/main.yml b/ansible/roles/buildtools/tasks/main.yml
deleted file mode 100644
index 6c0e3fa..0000000
--- a/ansible/roles/buildtools/tasks/main.yml
+++ /dev/null
@@ -1,6 +0,0 @@
-- name: Apt packages
-  apt:
-    name: "{{ item }}"
-  with_items: "{{ apt_packages }}"
-  tags: [buildtools]
-
diff --git a/ansible/roles/common/defaults/main.yml b/ansible/roles/common/defaults/main.yml
deleted file mode 100644
index 4aa8032..0000000
--- a/ansible/roles/common/defaults/main.yml
+++ /dev/null
@@ -1,13 +0,0 @@
-hosts: [
-  { host_ip: "10.100.198.200", host_name: "platdev"},
-]
-
-use_latest_for:
-  - debian-keyring
-  - debian-archive-keyring
-  - rng-tools
-  - python-netaddr
-
-obsolete_services:
-  - puppet
-  - chef-client
diff --git a/ansible/roles/common/files/ssh_config b/ansible/roles/common/files/ssh_config
deleted file mode 100644
index 990a43d..0000000
--- a/ansible/roles/common/files/ssh_config
+++ /dev/null
@@ -1,3 +0,0 @@
-Host *
-   StrictHostKeyChecking no
-   UserKnownHostsFile=/dev/null
diff --git a/ansible/roles/common/tasks/main.yml b/ansible/roles/common/tasks/main.yml
deleted file mode 100644
index 3ee9d2e..0000000
--- a/ansible/roles/common/tasks/main.yml
+++ /dev/null
@@ -1,40 +0,0 @@
-- name: JQ is present
-  apt:
-    name: jq
-    force: yes
-  tags: [common]
-
-- name: Host is present
-  lineinfile:
-    dest: /etc/hosts
-    regexp: "^{{ item.host_ip }}"
-    line: "{{ item.host_ip }} {{ item.host_name }}"
-  with_items: "{{ hosts }}"
-  tags: [common]
-
-- name: Latest apt packages
-  apt:
-    name: "{{ item }}"
-  with_items: "{{ use_latest_for }}"
-  tags: [common]
-
-- name: Services are not running
-  service:
-    name: "{{ item }}"
-    state: stopped
-  ignore_errors: yes
-  with_items: "{{ obsolete_services }}"
-  tags: [common]
-
-- name: Ensure known_hosts file is absent
-  file:
-    path: /home/vagrant/.ssh/known_hosts
-    state: absent
-
-- name: Disable Known Host Checking
-  copy:
-    src: files/ssh_config
-    dest: /home/vagrant/.ssh/config
-    owner: vagrant
-    group: vagrant
-    mode: 0600
diff --git a/ansible/roles/java8-oracle/tasks/main.yml b/ansible/roles/java8-oracle/tasks/main.yml
deleted file mode 100644
index b46950a..0000000
--- a/ansible/roles/java8-oracle/tasks/main.yml
+++ /dev/null
@@ -1,22 +0,0 @@
----
-- name: Install add-apt-repository
-  become: yes
-  apt: name=software-properties-common state=installed
-
-- name: Add Oracle Java repository
-  become: yes
-  apt_repository:
-    repo: "{{ java_apt_repo | default('ppa:webupd8team/java') }}"
-    update_cache: yes
-
-- name: Accept Java 8 license
-  become: yes
-  debconf: name='oracle-java8-installer' question='shared/accepted-oracle-license-v1-1' value='true' vtype='select'
-
-- name: Install Oracle Java 8
-  become: yes
-  apt: name={{item}} state=installed
-  with_items:
-  - oracle-java8-installer
-  - ca-certificates
-  - oracle-java8-set-default
diff --git a/api-test-playbook.yml b/api-test-playbook.yml
new file mode 100644
index 0000000..feace00
--- /dev/null
+++ b/api-test-playbook.yml
@@ -0,0 +1,37 @@
+---
+# api-test-playbook.yml
+
+- include: deploy-xos-playbook.yml
+
+- name: Prep for the API tests
+  hosts: xos_ui
+  connection: docker
+  roles:
+    - api-test-prep
+
+- name: Clear the XOS database
+  hosts: xos_db
+  connection: docker
+  roles:
+    - xos-clear-db
+
+- name: Run API tests
+  hosts: xos_ui
+  connection: docker
+  roles:
+    - xos-test-restore-db
+    - api-tests
+
+- name: Clear the XOS database (again)
+  hosts: xos_db
+  connection: docker
+  roles:
+    - xos-clear-db
+
+- name: Run TOSCA tests
+  hosts: xos_ui
+  connection: docker
+  roles:
+    - xos-test-restore-db
+    - tosca-tests
+
diff --git a/build.gradle.todelete b/build.gradle.todelete
deleted file mode 100644
index c4bdfda..0000000
--- a/build.gradle.todelete
+++ /dev/null
@@ -1,197 +0,0 @@
-/*
- * Copyright 2012 the original author or authors.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-import org.opencord.gradle.rules.*
-import org.yaml.snakeyaml.Yaml
-import org.yaml.snakeyaml.DumperOptions
-
-allprojects {
-    apply plugin: 'base'
-    apply plugin: 'de.gesellix.docker'
-    //apply plugin: 'com.tmiyamon.config'
-
-    docker {
-        // dockerHost = System.env.DOCKER_HOST ?: 'unix:///var/run/docker.sock'
-        // dockerHost = System.env.DOCKER_HOST ?: 'https://192.168.99.100:2376'
-        // certPath = System.getProperty('docker.cert.path') ?: "${System.getProperty('user.home')}/.docker/machine/machines/default"
-        // authConfigPlain = [
-        //   "username"       : "joe",
-        //   "password"       : "some-pw-as-needed",
-        //   "email"          : "joe@acme.com",
-        //   "serveraddress"  : "https://index.docker.io/v1/"
-        //  ]
-    }
-}
-
-def getCordAppVersion = { ->
-    def StdOut = new ByteArrayOutputStream()
-    exec {
-        commandLine "xpath", "-q", "-e", "project/version/text()", "../../onos-apps/apps/pom.xml"
-        standardOutput = StdOut
-    }
-    return StdOut.toString().trim()
-}
-
-ext {
-
-    // Upstream registry to simplify filling out the comps table below
-    upstreamReg = project.hasProperty('upstreamReg') ? project.getProperty('upstreamReg') : 'docker.io'
-
-    // Deployment target config file (yaml format); this can be overwritten from the command line
-    // using the -PdeployConfig=<file-path> syntax.
-    deployConfig = project.hasProperty('deployConfig') ? project.getProperty('deployConfig') : './config/default.yml'
-
-    println "Using deployment config: $deployConfig"
-    File configFile = new File(deployConfig)
-    def yaml = new Yaml()
-    config = yaml.load(configFile.newReader())
-
-    // Target registry to be used to publish docker images needed for deployment
-    targetReg = project.hasProperty('targetReg')
-        ? project.getProperty('targetReg')
-        : config.docker && config.docker.registry
-            ? config.docker.registry
-            : config.seedServer.ip
-                ? config.seedServer.ip + ":5000"
-                : 'localhost:5000'
-
-    // The tag used to tag the docker images push to the target registry
-    targetTag = project.hasProperty('targetTag')
-        ? project.getProperty('targetTag')
-        : config.docker && config.docker.imageVersion
-            ? config.docker.imageVersion
-            : 'candidate'
-
-    println "targetReg = $targetReg, targetTag = $targetTag"
-
-
-    // Version of the CORD apps to load into ONOS
-    cordAppVersion = project.hasProperty('cordAppVersion')
-        ? project.getProperty('cordAppVersion')
-        : getCordAppVersion()
-
-    println "CORD app version: $cordAppVersion"
-
-    // Component table
-    comps = [
-    ]
-}
-
-task copyAnsibleInventory(type: Copy) {
-  from 'inventory/templates/single-prod'
-    into 'inventory'
-    expand([
-        prod: config.seedServer.ip,
-    ])
-}
-
-task writeYamlConfig {
-  def outvar = config.seedServer
-  def outfilename = "genconfig/deployconf.yml"
-
-  DumperOptions options = new DumperOptions()
-
-  options.setExplicitStart(true);
-  options.setDefaultFlowStyle(DumperOptions.FlowStyle.BLOCK)
-  options.setPrettyFlow(true);
-  options.setIndent(2);
-
-  def yaml = new Yaml(options)
-  Writer outfile = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(outfilename), "utf-8"))
-
-  yaml.dump(outvar, outfile)
-
-  outfile.close()
-}
-
-// ------------- PlaceHolders ----------------
-
-task prime {
-    // TODO this is a place-holder.
-}
-
-// ---------------- Useful tasks ----------------
-
-task fetch << {
-    logger.info 'Platform install has nothing to fetch'
-}
-
-task buildImages << {
-    logger.info 'Platform install has nothing to build'
-}
-
-task publishImages {
-    comps.each { name, spec -> if (spec.type == 'image') { dependsOn "publish" + name } }
-}
-
-task publish {
-    dependsOn publishImages
-}
-
-tasks.addRule(new DockerFetchRule(project))
-tasks.addRule(new DockerPublishRule(project))
-tasks.addRule(new DockerTagRule(project))
-tasks.addRule(new GitSubmoduleUpdateRule(project))
-
-task prepPlatform(type: Exec) {
-    dependsOn copyAnsibleInventory
-    dependsOn writeYamlConfig
-
-    executable = "ansible-playbook"
-    args = ["-i", "inventory/single-prod", "--extra-vars", "@../genconfig/deployconf.yml", "cord-prep-platform.yml"]
-}
-
-task deployOpenStack (type: Exec) {
-    executable = "ansible-playbook"
-    args = ["-i", "inventory/single-prod", "--extra-vars", "@../genconfig/deployconf.yml", "cord-deploy-openstack.yml"]
-}
-
-task deployONOS (type: Exec) {
-    executable = "ansible-playbook"
-    args = ["-i", "inventory/single-prod", "--extra-vars", "@../genconfig/deployconf.yml", "cord-deploy-onos.yml"]
-}
-
-task deployXOS (type: Exec) {
-
-    executable = "ansible-playbook"
-    args = ["-i", "inventory/single-prod", "--extra-vars", "@../genconfig/deployconf.yml", "cord-deploy-xos.yml"]
-}
-
-task setupAutomation (type: Exec) {
-
-    executable = "ansible-playbook"
-    args = ["-i", "inventory/single-prod", "--extra-vars", "@../genconfig/deployconf.yml", "cord-automation.yml"]
-}
-
-deployOpenStack.mustRunAfter prepPlatform
-deployONOS.mustRunAfter deployOpenStack
-deployXOS.mustRunAfter deployONOS
-setupAutomation.mustRunAfter deployXOS
-
-task deployPlatform {
-     dependsOn prepPlatform
-     dependsOn deployOpenStack
-     dependsOn deployONOS
-     dependsOn deployXOS
-     dependsOn setupAutomation
-}
-
-task postDeployTests (type: Exec) {
-
-    executable = "ansible-playbook"
-    args = ["-i", "inventory/single-prod", "--extra-vars", "@../genconfig/deployconf.yml", "cord-post-deploy-playbook.yml"]
-}
-
diff --git a/buildSrc/build.gradle b/buildSrc/build.gradle
deleted file mode 100644
index cbb6652..0000000
--- a/buildSrc/build.gradle
+++ /dev/null
@@ -1,31 +0,0 @@
-/*
- * Copyright 2012 the original author or authors.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-apply plugin: 'groovy'
-
-repositories {
-    // maven { url 'https://repo.gradle.org/gradle/libs' }
-    maven { url 'https://plugins.gradle.org/m2/' }
-    // mavenCentral()
-}
-
-dependencies {
-    compile gradleApi()
-    compile localGroovy()
-    compile 'de.gesellix:gradle-docker-plugin:2016-05-05T13-15-11'
-    compile 'org.yaml:snakeyaml:1.10'
-    //compile 'gradle.plugin.com.tmiyamon:gradle-config:0.2.1'
-}
diff --git a/buildSrc/src/main/groovy/org/opencord/gradle/rules/DockerFetchRule.groovy b/buildSrc/src/main/groovy/org/opencord/gradle/rules/DockerFetchRule.groovy
deleted file mode 100644
index a9bb91b..0000000
--- a/buildSrc/src/main/groovy/org/opencord/gradle/rules/DockerFetchRule.groovy
+++ /dev/null
@@ -1,47 +0,0 @@
-/*
- * Copyright 2012 the original author or authors.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.opencord.gradle.rules
-
-import org.gradle.api.Rule
-import de.gesellix.gradle.docker.tasks.DockerPullTask
-
-
-/**
- * Gradle Rule class to fetch a docker image
- */
-class DockerFetchRule implements Rule {
-
-    def project
-
-    DockerFetchRule(project) {
-        this.project = project
-    }
-
-    String getDescription() {
-        'Rule Usage: fetch<component-name>'
-    }
-
-    void apply(String taskName) {
-        if (taskName.startsWith('fetch')) {
-            project.task(taskName, type: DockerPullTask) {
-                ext.compName = taskName - 'fetch'
-                def spec = project.comps[ext.compName]
-                imageName = spec.name + '@' + spec.digest
-            }
-        }
-    }
-}
diff --git a/buildSrc/src/main/groovy/org/opencord/gradle/rules/DockerPublishRule.groovy b/buildSrc/src/main/groovy/org/opencord/gradle/rules/DockerPublishRule.groovy
deleted file mode 100644
index a1d8164..0000000
--- a/buildSrc/src/main/groovy/org/opencord/gradle/rules/DockerPublishRule.groovy
+++ /dev/null
@@ -1,52 +0,0 @@
-/*
- * Copyright 2012 the original author or authors.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.opencord.gradle.rules
-
-import org.gradle.api.Rule
-import de.gesellix.gradle.docker.tasks.DockerPushTask
-
-
-/**
- * Gradle Rule class to publish (push) a docker image to a private repo
- */
-class DockerPublishRule implements Rule {
-
-    def project
-
-    DockerPublishRule(project) {
-        this.project = project
-    }
-
-    String getDescription() {
-        'Rule Usage: publish<component-name>'
-    }
-
-    void apply(String taskName) {
-        if (taskName.startsWith('publish')) {
-            project.task(taskName, type: DockerPushTask) {
-                ext.compName = taskName - 'publish'
-                println "Publish rule: $taskName + $compName"
-                def tagTask = "tag$compName"
-                println "Tagtask: $tagTask"
-                dependsOn tagTask
-                def spec = project.comps[ext.compName]
-                repositoryName = spec.name + ':' + project.targetTag
-                registry = project.targetReg
-            }
-        }
-    }
-}
diff --git a/buildSrc/src/main/groovy/org/opencord/gradle/rules/DockerTagRule.groovy b/buildSrc/src/main/groovy/org/opencord/gradle/rules/DockerTagRule.groovy
deleted file mode 100644
index 474e16d..0000000
--- a/buildSrc/src/main/groovy/org/opencord/gradle/rules/DockerTagRule.groovy
+++ /dev/null
@@ -1,48 +0,0 @@
-/*
- * Copyright 2012 the original author or authors.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.opencord.gradle.rules
-
-import org.gradle.api.Rule
-import de.gesellix.gradle.docker.tasks.DockerTagTask
-
-
-/**
- * Gradle Rule class to tag a docker image
- */
-class DockerTagRule implements Rule {
-
-    def project
-
-    DockerTagRule(project) {
-        this.project = project
-    }
-
-    String getDescription() {
-        'Rule Usage: tag<component-name>'
-    }
-
-    void apply(String taskName) {
-        if (taskName.startsWith('tag') && !taskName.equals('tag')) {
-            project.task(taskName, type: DockerTagTask) {
-                ext.compName = taskName - 'tag'
-                def spec = project.comps[compName]
-                imageId = spec.name + '@' + spec.digest
-                tag = compName + ':' + project.targetTag
-            }
-        }
-    }
-}
diff --git a/buildSrc/src/main/groovy/org/opencord/gradle/rules/GitSubmoduleUpdateRule.groovy b/buildSrc/src/main/groovy/org/opencord/gradle/rules/GitSubmoduleUpdateRule.groovy
deleted file mode 100644
index 3b46424..0000000
--- a/buildSrc/src/main/groovy/org/opencord/gradle/rules/GitSubmoduleUpdateRule.groovy
+++ /dev/null
@@ -1,48 +0,0 @@
-/*
- * Copyright 2012 the original author or authors.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.opencord.gradle.rules
-
-import org.gradle.api.Rule
-import org.gradle.api.tasks.Exec
-
-
-/**
- * Gradle Rule class to fetch a docker image
- */
-class GitSubmoduleUpdateRule implements Rule {
-
-    def project
-
-    GitSubmoduleUpdateRule(project) {
-        this.project = project
-    }
-
-    String getDescription() {
-        'Rule Usage: gitupdate<component-name>'
-    }
-
-    void apply(String taskName) {
-        if (taskName.startsWith('gitupdate')) {
-            project.task(taskName, type: Exec) {
-                ext.compName = taskName - 'gitupdate'
-                def spec = project.comps[ext.compName]
-                workingDir = '.'
-                commandLine '/usr/bin/git', 'submodule', 'update', '--init', '--recursive', spec.componentDir
-            }
-        }
-    }
-}
diff --git a/cloudlab-openstack-playbook.yml b/cloudlab-openstack-playbook.yml
deleted file mode 100644
index c529b8a..0000000
--- a/cloudlab-openstack-playbook.yml
+++ /dev/null
@@ -1,16 +0,0 @@
----
-# Installs XOS on Cloudlab's OpenStack profile systems
-
-- hosts: cloudlab
-
-  vars:
-    xos_repo_url: "https://github.com/opencord/xos"
-    xos_repo_dest: "~/xos"
-    xos_repo_branch: "master"
-    xos_configuration: "devel"
-    xos_container_rebuild: true
-
-  roles:
-    - { role: common-prep, become: yes }
-    - xos-install
-
diff --git a/cord-diag-playbook.yml b/collect-diag-playbook.yml
similarity index 96%
rename from cord-diag-playbook.yml
rename to collect-diag-playbook.yml
index dd6721a..fb477e5 100644
--- a/cord-diag-playbook.yml
+++ b/collect-diag-playbook.yml
@@ -1,4 +1,5 @@
 ---
+# collect-diag-playbook.yml
 # Collects diagnostic information for the currently running cord-pod config
 
 - name: Create diag_dir fact
diff --git a/config/cloudlab.yml b/config/cloudlab.yml
deleted file mode 100644
index 104eaca..0000000
--- a/config/cloudlab.yml
+++ /dev/null
@@ -1,52 +0,0 @@
-# Deployment configuration for a single-node physical hardware POD
----
-seedServer:
-
-  # Put the IP of your CloudLab node here
-  ip: '128.104.222.83'
-
-  # User name and password used by Ansible to connect to the host for remote
-  # provisioning.   Put your CloudLab username here; also add your password or
-  # run ssh-agent to allow for password-less SSH login.
-  user: 'acb'
-  password: 'onos_test'
-
-  #
-  # *** For a single-node pod on CloudLab, you don't need to change anything below here ***
-  #
-
-  # Network address information for the head node:
-  #
-  # fabric_ip     - the IP address and mask bits to be used to configure the network
-  #                 interface connected to the leaf - spine fabric
-  #
-  # management_ip - the IP address and mask bits to be used to configure the network
-  #                 interface connecting the head node to the POD internal
-  #                 management network. The head node will deliver DHCP addresses to
-  #                 the other compute nodes over this interface
-  #
-  # external_ip   - the IP address and mask bits to be used to configure the network
-  #                 interface connecting the head node (and the POD) to the
-  #                 Internet. All traffic in the POD to external hosts will be
-  #                 NAT-ed through this interface
-  management_ip: '10.6.0.1/24'
-  management_iface: 'eth2'
-  external_iface: 'eth0'
-  skipTags:
-    - 'interface_config'
-    - 'switch_support'
-  extraVars:
-    - 'on_cloudlab=True'
-
-docker:
-  imageVersion: candidate
-
-otherNodes:
-  # Experimental
-  #
-  # Specifies the subnet and address range that will be used to allocate IP addresses
-  # to the compute nodes as they are deployed into the POD.
-  fabric:
-    network: 10.6.1.1/24
-    range_low: 10.6.1.2
-    range_high: 10.6.1.253
diff --git a/config/default.yml b/config/default.yml
deleted file mode 100644
index 68fe747..0000000
--- a/config/default.yml
+++ /dev/null
@@ -1,15 +0,0 @@
-# Deployment configuration for a single-node physical hardware POD
----
-seedServer:
-
-  # Put the IP of your target server here
-  ip: '1.2.3.4'
-
-  # User name and password used by Ansible to connect to the target server for remote
-  # provisioning.   You can also run ssh-agent to allow for password-less SSH login.
-  user: 'myuser'
-  password: 'cord_test'
-
-  # Uncomment if the target server is a CloudLab machine
-  #extraVars:
-  #  - 'on_cloudlab=True'
diff --git a/cord-automation.yml b/cord-automation-playbook.yml
similarity index 75%
rename from cord-automation.yml
rename to cord-automation-playbook.yml
index 234785a..556adc6 100644
--- a/cord-automation.yml
+++ b/cord-automation-playbook.yml
@@ -1,4 +1,5 @@
 ---
+# cord-automation-playbook.yml
 # Installs the automation scripts used by MaaS to provision nodes.
 
 - name: Include vars
@@ -7,9 +8,8 @@
     - name: Include variables
       include_vars: "{{ item }}"
       with_items:
-        - vars/cord_defaults.yml
-        - vars/cord.yml
-        - vars/example_keystone.yml
+        - "profile_manifests/{{ cord_profile }}.yml"
+        - profile_manifests/local_vars.yml
 
 - name: Set up Automated Compute Node Provisioning
   hosts: head
diff --git a/cord-compute-maas-playbook.yml b/cord-compute-maas-playbook.yml
new file mode 100644
index 0000000..043d316
--- /dev/null
+++ b/cord-compute-maas-playbook.yml
@@ -0,0 +1,44 @@
+---
+# cord-compute-maas-playbook.yml
+# Installs and configures compute nodes when using MaaS
+
+- name: Include vars
+  hosts: all
+  tasks:
+    - name: Include variables
+      include_vars: "{{ item }}"
+      with_items:
+        - "profile_manifests/{{ cord_profile }}.yml"
+        - profile_manifests/local_vars.yml
+
+- name: Configure compute hosts to use DNS server
+  hosts: all
+  become: yes
+  roles:
+    - { role: dns-configure, when: not on_maas }
+
+- name: Prep systems
+  hosts: compute
+  become: yes
+  roles:
+    - common-prep
+    - { role: cloudlab-prep, when: on_cloudlab }
+
+- name: Configure head node (for sshkey)
+  hosts: head
+  roles:
+    - { role: head-prep, become: yes }
+
+- name: Configure compute nodes
+  hosts: compute
+  become: yes
+  roles:
+    - compute-prep
+
+- name: Deploy compute nodes, create configuration
+  hosts: head
+  roles:
+    - juju-compute-setup
+    - compute-node-config
+    - compute-node-enable-maas
+
diff --git a/cord-compute-playbook.yml b/cord-compute-playbook.yml
index cdcd81e..2ccb405 100644
--- a/cord-compute-playbook.yml
+++ b/cord-compute-playbook.yml
@@ -1,5 +1,6 @@
 ---
-# Installs new compute nodes in cord-pod XOS configuration using Juju
+# cord-compute-playbook.yml
+# Installs and configures compute nodes
 
 - name: Include vars
   hosts: all
@@ -7,9 +8,8 @@
     - name: Include variables
       include_vars: "{{ item }}"
       with_items:
-        - vars/cord_defaults.yml
-        - vars/cord.yml
-        - vars/example_keystone.yml
+        - "profile_manifests/{{ cord_profile }}.yml"
+        - profile_manifests/local_vars.yml
 
 - name: Configure compute hosts to use DNS server
   hosts: all
@@ -35,8 +35,17 @@
   roles:
     - compute-prep
 
-- name: Deploy compute nodes
+- name: Deploy compute nodes, create configuration
   hosts: head
   roles:
     - juju-compute-setup
-    - xos-compute-setup
+    - compute-node-config
+
+- include: add-onboard-containers-playbook.yml
+
+- name: Enable compute nodes in XOS
+  hosts: xos_ui
+  connection: docker
+  roles:
+    - compute-node-enable
+
diff --git a/cord-deploy-xos.yml b/cord-deploy-xos.yml
deleted file mode 100644
index 29221fb..0000000
--- a/cord-deploy-xos.yml
+++ /dev/null
@@ -1,24 +0,0 @@
----
-# Deploys XOS in Docker containers on the CORD head node
-
-- name: Include vars
-  hosts: all
-  tasks:
-    - name: Include variables
-      include_vars: "{{ item }}"
-      with_items:
-        - vars/cord_defaults.yml
-        - vars/cord.yml
-        - vars/example_keystone.yml
-
-- name: XOS setup
-  hosts: head
-  roles:
-    - { role: xos-build, when: xos_container_rebuild }
-    - xos-install
-
-- name: Start XOS
-  hosts: head
-  roles:
-    - xos-config
-    - xos-head-start
diff --git a/cord-fabric-pingtest.yml b/cord-fabric-pingtest.yml
index 420fde0..90dcb4f 100644
--- a/cord-fabric-pingtest.yml
+++ b/cord-fabric-pingtest.yml
@@ -8,9 +8,9 @@
     - name: Include variables
       include_vars: "{{ item }}"
       with_items:
-        - vars/cord_defaults.yml
-        - vars/cord.yml
-        - vars/example_keystone.yml
+        - profile_manifests/cord_defaults.yml
+        - profile_manifests/cord.yml
+        - profile_manifests/example_keystone.yml
 
 - name: Fabric ping test to gateway
   hosts: compute
diff --git a/cord-head-playbook.yml b/cord-head-playbook.yml
deleted file mode 100644
index ef1e169..0000000
--- a/cord-head-playbook.yml
+++ /dev/null
@@ -1,81 +0,0 @@
----
-# Installs the single node cord-pod XOS configuration, using Juju to provision
-# the OpenStack installation inside of VM's on the head node.
-#
-# This is used by `scripts/single-node-pod.sh` for E2E testing.
-
-- name: Include vars
-  hosts: all
-  tasks:
-    - name: Include variables
-      include_vars: "{{ item }}"
-      with_items:
-        - vars/cord_defaults.yml
-        - vars/cord.yml
-        - vars/example_keystone.yml
-
-- name: DNS Server and apt-cacher-ng Setup
-  hosts: head
-  become: yes
-  roles:
-    - { role: dns-nsd, when: not on_maas }
-    - { role: dns-unbound, when: not on_maas }
-    - apt-cacher-ng
-
-- name: Configure all hosts to use DNS server
-  hosts: all
-  become: yes
-  roles:
-    - { role: dns-configure, when: not on_maas }
-
-- name: Prep systems
-  hosts: all
-  become: yes
-  roles:
-    - common-prep
-    - { role: cloudlab-prep, when: on_cloudlab }
-
-- name: Configure head node, create VM's
-  hosts: head
-  roles:
-    - { role: head-prep, become: yes }
-    - create-lxd
-
-- name: Start OpenStack install
-  hosts: head
-  roles:
-    - juju-setup
-
-- name: XOS setup
-  hosts: head
-  roles:
-    - { role: xos-build, when: xos_container_rebuild }
-    - xos-install
-
-- name: Finish OpenStack install
-  hosts: head
-  roles:
-    - juju-finish
-
-- name: Set up VMs
-  hosts: head
-  roles:
-    - onos-cord-install
-    - onos-fabric-install
-
-- name: Start ONOS and XOS
-  hosts: head
-  roles:
-    - docker-compose
-    - xos-config
-    - xos-head-start
-
-- name: Set up Automated Compute Node Provisioning
-  hosts: head
-  roles:
-    - { role: automation-integration, when: on_maas }
-
-- name: Prologue
-  hosts: head
-  roles:
-    - head-prologue
diff --git a/cord-post-deploy-playbook.yml b/cord-post-deploy-playbook.yml
deleted file mode 100644
index 343fd98..0000000
--- a/cord-post-deploy-playbook.yml
+++ /dev/null
@@ -1,31 +0,0 @@
----
-# Tests single node cord-pod XOS configuration
-
-- name: Include vars
-  hosts: all
-  tasks:
-    - name: Include variables
-      include_vars: "{{ item }}"
-      with_items:
-        - vars/cord_defaults.yml
-        - vars/cord.yml
-        - vars/example_keystone.yml
-
-- name: Run platform checks
-  hosts: head
-  become: no
-  roles:
-    - platform-check
-
-- name: Create test client
-  hosts: head
-  become: yes
-  roles:
-    - maas-test-client-install
-
-- name: Run post-deploy tests
-  hosts: head
-  become: no
-  roles:
-    - test-vsg
-    - test-exampleservice
diff --git a/cord-refresh-fabric.yml b/cord-refresh-fabric.yml
index ebad1c4..6ba451b 100644
--- a/cord-refresh-fabric.yml
+++ b/cord-refresh-fabric.yml
@@ -8,9 +8,9 @@
     - name: Include variables
       include_vars: "{{ item }}"
       with_items:
-        - vars/cord_defaults.yml
-        - vars/cord.yml
-        - vars/example_keystone.yml
+        - profile_manifests/cord_defaults.yml
+        - profile_manifests/cord.yml
+        - profile_manifests/example_keystone.yml
 
 - name: Prep fabric on head node
   hosts: head
diff --git a/credentials/README.md b/credentials/README.md
index 7c8e469..5e507f4 100644
--- a/credentials/README.md
+++ b/credentials/README.md
@@ -1,5 +1,5 @@
-# credentials
+# Credentials
 
-This directory contains credentials autogenerated by ansible during playbook
-runs, when they aren't already defined by some other means.
+Credentials generated by ansible during runs will be stored here, if not
+defined elsewhere.
 
diff --git a/cord-deploy-onos.yml b/deploy-onos-playbook.yml
similarity index 70%
rename from cord-deploy-onos.yml
rename to deploy-onos-playbook.yml
index eef5ab4..67ff1b4 100644
--- a/cord-deploy-onos.yml
+++ b/deploy-onos-playbook.yml
@@ -1,4 +1,5 @@
 ---
+# deploy-onos-playbook.yml
 # Deploys ONOS in Docker containers on the CORD head node
 
 - name: Include vars
@@ -7,12 +8,12 @@
     - name: Include variables
       include_vars: "{{ item }}"
       with_items:
-        - vars/cord_defaults.yml
-        - vars/cord.yml
-        - vars/example_keystone.yml
+        - "profile_manifests/{{ cord_profile }}.yml"
+        - profile_manifests/local_vars.yml
 
 - name: Deploy and start ONOS containers
   hosts: head
   roles:
     - onos-cord-install
     - onos-fabric-install
+
diff --git a/cord-deploy-openstack.yml b/deploy-openstack-playbook.yml
similarity index 85%
rename from cord-deploy-openstack.yml
rename to deploy-openstack-playbook.yml
index 97a4efc..4c40c56 100644
--- a/cord-deploy-openstack.yml
+++ b/deploy-openstack-playbook.yml
@@ -1,4 +1,5 @@
 ---
+# deploy-openstack-playbook.yml
 # Deploys OpenStack in LXD containers on the CORD head node
 
 - name: Include vars
@@ -7,9 +8,8 @@
     - name: Include variables
       include_vars: "{{ item }}"
       with_items:
-        - vars/cord_defaults.yml
-        - vars/cord.yml
-        - vars/example_keystone.yml
+        - "profile_manifests/{{ cord_profile }}.yml"
+        - profile_manifests/local_vars.yml
 
 - name: Configure head node, create containers
   hosts: head
diff --git a/deploy-xos-playbook.yml b/deploy-xos-playbook.yml
new file mode 100644
index 0000000..965b2e9
--- /dev/null
+++ b/deploy-xos-playbook.yml
@@ -0,0 +1,42 @@
+---
+# deploy-xos-playbook.yml
+
+- name: Include vars
+  hosts: all
+  tasks:
+    - name: Include variables
+      include_vars: "{{ item }}"
+      with_items:
+        - "profile_manifests/{{ cord_profile }}.yml"
+        - profile_manifests/local_vars.yml
+
+# for docker, docker-compose
+- include: devel-tools-playbook.yml
+
+# for generating SSL certs
+- include: pki-setup-playbook.yml
+
+- name: Create CORD profile, create docker images, bootstrap XOS in docker
+  hosts: head
+  roles:
+    - cord-profile
+    - xos-docker-images
+    - xos-bootstrap
+
+- include: add-bootstrap-containers-playbook.yml
+
+- name: Onboard XOS services
+  hosts: xos_bootstrap_ui
+  connection: docker
+  roles:
+    - xos-onboarding
+
+- include: add-onboard-containers-playbook.yml
+
+- name: Check to see if XOS UI is ready, apply profile config
+  hosts: xos_ui
+  connection: docker
+  roles:
+    - xos-ready
+    - xos-config
+
diff --git a/devel-tools-playbook.yml b/devel-tools-playbook.yml
new file mode 100644
index 0000000..76655a8
--- /dev/null
+++ b/devel-tools-playbook.yml
@@ -0,0 +1,8 @@
+---
+# devel-tools-playbook.yml
+
+- name: Install developer tools
+  hosts: head
+  roles:
+    - docker-install
+
diff --git a/gradle.properties b/gradle.properties
deleted file mode 100644
index c955ae8..0000000
--- a/gradle.properties
+++ /dev/null
@@ -1,4 +0,0 @@
-org.gradle.daemon=true
-
-# Uncomment for running on CloudLab
-#deployConfig=/platform-install/config/cloudlab.yml
diff --git a/gradle/wrapper/gradle-wrapper.jar b/gradle/wrapper/gradle-wrapper.jar
deleted file mode 100644
index 2c6137b..0000000
--- a/gradle/wrapper/gradle-wrapper.jar
+++ /dev/null
Binary files differ
diff --git a/gradle/wrapper/gradle-wrapper.properties b/gradle/wrapper/gradle-wrapper.properties
deleted file mode 100644
index cf051c0..0000000
--- a/gradle/wrapper/gradle-wrapper.properties
+++ /dev/null
@@ -1,6 +0,0 @@
-#Thu May 05 16:09:18 PDT 2016
-distributionBase=GRADLE_USER_HOME
-distributionPath=wrapper/dists
-zipStoreBase=GRADLE_USER_HOME
-zipStorePath=wrapper/dists
-distributionUrl=https\://services.gradle.org/distributions/gradle-2.13-all.zip
diff --git a/gradlew b/gradlew
deleted file mode 100755
index 9d82f78..0000000
--- a/gradlew
+++ /dev/null
@@ -1,160 +0,0 @@
-#!/usr/bin/env bash
-
-##############################################################################
-##
-##  Gradle start up script for UN*X
-##
-##############################################################################
-
-# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
-DEFAULT_JVM_OPTS=""
-
-APP_NAME="Gradle"
-APP_BASE_NAME=`basename "$0"`
-
-# Use the maximum available, or set MAX_FD != -1 to use that value.
-MAX_FD="maximum"
-
-warn ( ) {
-    echo "$*"
-}
-
-die ( ) {
-    echo
-    echo "$*"
-    echo
-    exit 1
-}
-
-# OS specific support (must be 'true' or 'false').
-cygwin=false
-msys=false
-darwin=false
-case "`uname`" in
-  CYGWIN* )
-    cygwin=true
-    ;;
-  Darwin* )
-    darwin=true
-    ;;
-  MINGW* )
-    msys=true
-    ;;
-esac
-
-# Attempt to set APP_HOME
-# Resolve links: $0 may be a link
-PRG="$0"
-# Need this for relative symlinks.
-while [ -h "$PRG" ] ; do
-    ls=`ls -ld "$PRG"`
-    link=`expr "$ls" : '.*-> \(.*\)$'`
-    if expr "$link" : '/.*' > /dev/null; then
-        PRG="$link"
-    else
-        PRG=`dirname "$PRG"`"/$link"
-    fi
-done
-SAVED="`pwd`"
-cd "`dirname \"$PRG\"`/" >/dev/null
-APP_HOME="`pwd -P`"
-cd "$SAVED" >/dev/null
-
-CLASSPATH=$APP_HOME/gradle/wrapper/gradle-wrapper.jar
-
-# Determine the Java command to use to start the JVM.
-if [ -n "$JAVA_HOME" ] ; then
-    if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
-        # IBM's JDK on AIX uses strange locations for the executables
-        JAVACMD="$JAVA_HOME/jre/sh/java"
-    else
-        JAVACMD="$JAVA_HOME/bin/java"
-    fi
-    if [ ! -x "$JAVACMD" ] ; then
-        die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME
-
-Please set the JAVA_HOME variable in your environment to match the
-location of your Java installation."
-    fi
-else
-    JAVACMD="java"
-    which java >/dev/null 2>&1 || die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
-
-Please set the JAVA_HOME variable in your environment to match the
-location of your Java installation."
-fi
-
-# Increase the maximum file descriptors if we can.
-if [ "$cygwin" = "false" -a "$darwin" = "false" ] ; then
-    MAX_FD_LIMIT=`ulimit -H -n`
-    if [ $? -eq 0 ] ; then
-        if [ "$MAX_FD" = "maximum" -o "$MAX_FD" = "max" ] ; then
-            MAX_FD="$MAX_FD_LIMIT"
-        fi
-        ulimit -n $MAX_FD
-        if [ $? -ne 0 ] ; then
-            warn "Could not set maximum file descriptor limit: $MAX_FD"
-        fi
-    else
-        warn "Could not query maximum file descriptor limit: $MAX_FD_LIMIT"
-    fi
-fi
-
-# For Darwin, add options to specify how the application appears in the dock
-if $darwin; then
-    GRADLE_OPTS="$GRADLE_OPTS \"-Xdock:name=$APP_NAME\" \"-Xdock:icon=$APP_HOME/media/gradle.icns\""
-fi
-
-# For Cygwin, switch paths to Windows format before running java
-if $cygwin ; then
-    APP_HOME=`cygpath --path --mixed "$APP_HOME"`
-    CLASSPATH=`cygpath --path --mixed "$CLASSPATH"`
-    JAVACMD=`cygpath --unix "$JAVACMD"`
-
-    # We build the pattern for arguments to be converted via cygpath
-    ROOTDIRSRAW=`find -L / -maxdepth 1 -mindepth 1 -type d 2>/dev/null`
-    SEP=""
-    for dir in $ROOTDIRSRAW ; do
-        ROOTDIRS="$ROOTDIRS$SEP$dir"
-        SEP="|"
-    done
-    OURCYGPATTERN="(^($ROOTDIRS))"
-    # Add a user-defined pattern to the cygpath arguments
-    if [ "$GRADLE_CYGPATTERN" != "" ] ; then
-        OURCYGPATTERN="$OURCYGPATTERN|($GRADLE_CYGPATTERN)"
-    fi
-    # Now convert the arguments - kludge to limit ourselves to /bin/sh
-    i=0
-    for arg in "$@" ; do
-        CHECK=`echo "$arg"|egrep -c "$OURCYGPATTERN" -`
-        CHECK2=`echo "$arg"|egrep -c "^-"`                                 ### Determine if an option
-
-        if [ $CHECK -ne 0 ] && [ $CHECK2 -eq 0 ] ; then                    ### Added a condition
-            eval `echo args$i`=`cygpath --path --ignore --mixed "$arg"`
-        else
-            eval `echo args$i`="\"$arg\""
-        fi
-        i=$((i+1))
-    done
-    case $i in
-        (0) set -- ;;
-        (1) set -- "$args0" ;;
-        (2) set -- "$args0" "$args1" ;;
-        (3) set -- "$args0" "$args1" "$args2" ;;
-        (4) set -- "$args0" "$args1" "$args2" "$args3" ;;
-        (5) set -- "$args0" "$args1" "$args2" "$args3" "$args4" ;;
-        (6) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" ;;
-        (7) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" ;;
-        (8) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" ;;
-        (9) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" "$args8" ;;
-    esac
-fi
-
-# Split up the JVM_OPTS And GRADLE_OPTS values into an array, following the shell quoting and substitution rules
-function splitJvmOpts() {
-    JVM_OPTS=("$@")
-}
-eval splitJvmOpts $DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS
-JVM_OPTS[${#JVM_OPTS[*]}]="-Dorg.gradle.appname=$APP_BASE_NAME"
-
-exec "$JAVACMD" "${JVM_OPTS[@]}" -classpath "$CLASSPATH" org.gradle.wrapper.GradleWrapperMain "$@"
diff --git a/gradlew.bat b/gradlew.bat
deleted file mode 100644
index 72d362d..0000000
--- a/gradlew.bat
+++ /dev/null
@@ -1,90 +0,0 @@
-@if "%DEBUG%" == "" @echo off
-@rem ##########################################################################
-@rem
-@rem  Gradle startup script for Windows
-@rem
-@rem ##########################################################################
-
-@rem Set local scope for the variables with windows NT shell
-if "%OS%"=="Windows_NT" setlocal
-
-@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
-set DEFAULT_JVM_OPTS=
-
-set DIRNAME=%~dp0
-if "%DIRNAME%" == "" set DIRNAME=.
-set APP_BASE_NAME=%~n0
-set APP_HOME=%DIRNAME%
-
-@rem Find java.exe
-if defined JAVA_HOME goto findJavaFromJavaHome
-
-set JAVA_EXE=java.exe
-%JAVA_EXE% -version >NUL 2>&1
-if "%ERRORLEVEL%" == "0" goto init
-
-echo.
-echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
-echo.
-echo Please set the JAVA_HOME variable in your environment to match the
-echo location of your Java installation.
-
-goto fail
-
-:findJavaFromJavaHome
-set JAVA_HOME=%JAVA_HOME:"=%
-set JAVA_EXE=%JAVA_HOME%/bin/java.exe
-
-if exist "%JAVA_EXE%" goto init
-
-echo.
-echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
-echo.
-echo Please set the JAVA_HOME variable in your environment to match the
-echo location of your Java installation.
-
-goto fail
-
-:init
-@rem Get command-line arguments, handling Windows variants
-
-if not "%OS%" == "Windows_NT" goto win9xME_args
-if "%@eval[2+2]" == "4" goto 4NT_args
-
-:win9xME_args
-@rem Slurp the command line arguments.
-set CMD_LINE_ARGS=
-set _SKIP=2
-
-:win9xME_args_slurp
-if "x%~1" == "x" goto execute
-
-set CMD_LINE_ARGS=%*
-goto execute
-
-:4NT_args
-@rem Get arguments from the 4NT Shell from JP Software
-set CMD_LINE_ARGS=%$
-
-:execute
-@rem Setup the command line
-
-set CLASSPATH=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar
-
-@rem Execute Gradle
-"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" org.gradle.wrapper.GradleWrapperMain %CMD_LINE_ARGS%
-
-:end
-@rem End local scope for the variables with windows NT shell
-if "%ERRORLEVEL%"=="0" goto mainEnd
-
-:fail
-rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of
-rem the _cmd.exe /c_ return code!
-if  not "" == "%GRADLE_EXIT_CONSOLE%" exit 1
-exit /b 1
-
-:mainEnd
-if "%OS%"=="Windows_NT" endlocal
-
-:omega
diff --git a/group_vars/arizona.yml b/group_vars/arizona.yml
deleted file mode 100644
index d945d36..0000000
--- a/group_vars/arizona.yml
+++ /dev/null
@@ -1,5 +0,0 @@
----
-# file: group_vars/arizona.yml
-
-mgmt_net_prefix: 192.168.102
-cloudlab: false
diff --git a/group_vars/cloudlab.yml b/group_vars/cloudlab.yml
deleted file mode 100644
index 340ac15..0000000
--- a/group_vars/cloudlab.yml
+++ /dev/null
@@ -1,6 +0,0 @@
----
-# file: group_vars/cloudlab.yml
-
-mgmt_net_prefix: 192.168.100
-cloudlab: true
-
diff --git a/group_vars/cord-test.yml b/group_vars/cord-test.yml
deleted file mode 100644
index 9836d41..0000000
--- a/group_vars/cord-test.yml
+++ /dev/null
@@ -1,4 +0,0 @@
----
-# file: group_vars/cord-test.yml
-
-
diff --git a/group_vars/cord.yml b/group_vars/cord.yml
deleted file mode 100644
index 3b1cf22..0000000
--- a/group_vars/cord.yml
+++ /dev/null
@@ -1,3 +0,0 @@
----
-# file: group_vars/cord.yml
-
diff --git a/group_vars/princeton.yml b/group_vars/princeton.yml
deleted file mode 100644
index 7c1e6ba..0000000
--- a/group_vars/princeton.yml
+++ /dev/null
@@ -1,6 +0,0 @@
----
-# file: group_vars/princeton.yml
-
-mgmt_net_prefix: 192.168.100
-cloudlab: false
-
diff --git a/group_vars/singapore.yml b/group_vars/singapore.yml
deleted file mode 100644
index 85ba602..0000000
--- a/group_vars/singapore.yml
+++ /dev/null
@@ -1,5 +0,0 @@
----
-# file: group_vars/singapore.yml
-
-mgmt_net_prefix: 192.168.103
-cloudlab: false
diff --git a/group_vars/stanford.yml b/group_vars/stanford.yml
deleted file mode 100644
index fee7f1c..0000000
--- a/group_vars/stanford.yml
+++ /dev/null
@@ -1,6 +0,0 @@
----
-# file: group_vars/stanford.yml
-
-mgmt_net_prefix: 192.168.101
-cloudlab: false
-
diff --git a/inventory/api-test b/inventory/api-test
new file mode 100644
index 0000000..42f415b
--- /dev/null
+++ b/inventory/api-test
@@ -0,0 +1,8 @@
+; api-test configuration
+
+[all:vars]
+cord_profile=api-test
+
+[head]
+localhost ansible_connection=local
+
diff --git a/inventory/devstack b/inventory/devstack
new file mode 100644
index 0000000..717a046
--- /dev/null
+++ b/inventory/devstack
@@ -0,0 +1,8 @@
+; devstack configuration
+
+[all:vars]
+cord_profile=devstack
+
+[head]
+localhost ansible_connection=local
+
diff --git a/inventory/frontend b/inventory/frontend
new file mode 100644
index 0000000..0f6abe8
--- /dev/null
+++ b/inventory/frontend
@@ -0,0 +1,8 @@
+; frontend configuration
+
+[all:vars]
+cord_profile=frontend
+
+[head]
+localhost ansible_connection=local
+
diff --git a/inventory/arizona b/inventory/legacy/arizona
similarity index 100%
rename from inventory/arizona
rename to inventory/legacy/arizona
diff --git a/inventory/aztest b/inventory/legacy/aztest
similarity index 100%
rename from inventory/aztest
rename to inventory/legacy/aztest
diff --git a/inventory/cloudlab b/inventory/legacy/cloudlab
similarity index 100%
rename from inventory/cloudlab
rename to inventory/legacy/cloudlab
diff --git a/inventory/cloudlab-openstack b/inventory/legacy/cloudlab-openstack
similarity index 100%
rename from inventory/cloudlab-openstack
rename to inventory/legacy/cloudlab-openstack
diff --git a/inventory/cord b/inventory/legacy/cord
similarity index 100%
rename from inventory/cord
rename to inventory/legacy/cord
diff --git a/inventory/cord-test b/inventory/legacy/cord-test
similarity index 100%
rename from inventory/cord-test
rename to inventory/legacy/cord-test
diff --git a/inventory/hawaii b/inventory/legacy/hawaii
similarity index 100%
rename from inventory/hawaii
rename to inventory/legacy/hawaii
diff --git a/inventory/multi-localhost b/inventory/legacy/multi-localhost
similarity index 100%
rename from inventory/multi-localhost
rename to inventory/legacy/multi-localhost
diff --git a/inventory/princeton b/inventory/legacy/princeton
similarity index 100%
rename from inventory/princeton
rename to inventory/legacy/princeton
diff --git a/inventory/singapore b/inventory/legacy/singapore
similarity index 100%
rename from inventory/singapore
rename to inventory/legacy/singapore
diff --git a/inventory/single-localhost b/inventory/legacy/single-localhost
similarity index 100%
rename from inventory/single-localhost
rename to inventory/legacy/single-localhost
diff --git a/inventory/stanford b/inventory/legacy/stanford
similarity index 100%
rename from inventory/stanford
rename to inventory/legacy/stanford
diff --git a/inventory/unc b/inventory/legacy/unc
similarity index 100%
rename from inventory/unc
rename to inventory/legacy/unc
diff --git a/inventory/localhost b/inventory/localhost
new file mode 100644
index 0000000..3dcd971
--- /dev/null
+++ b/inventory/localhost
@@ -0,0 +1,6 @@
+; localhost configuration
+; used for installing prereqs with cord-bootstrap.sh script
+
+[head]
+localhost ansible_connection=local
+
diff --git a/inventory/mock-mcord b/inventory/mock-mcord
new file mode 100644
index 0000000..bc2d4d1
--- /dev/null
+++ b/inventory/mock-mcord
@@ -0,0 +1,8 @@
+; mock-mcord configuration
+
+[all:vars]
+cord_profile=mock-mcord
+
+[head]
+localhost ansible_connection=local
+
diff --git a/inventory/mock-rcord b/inventory/mock-rcord
new file mode 100644
index 0000000..69eecee
--- /dev/null
+++ b/inventory/mock-rcord
@@ -0,0 +1,8 @@
+; mock-rcord configuration
+
+[all:vars]
+cord_profile=mock-rcord
+
+[head]
+localhost ansible_connection=local
+
diff --git a/inventory/rcord b/inventory/rcord
new file mode 100644
index 0000000..92adce2
--- /dev/null
+++ b/inventory/rcord
@@ -0,0 +1,8 @@
+; rcord configuration
+
+[all:vars]
+cord_profile=rcord
+
+[head]
+localhost ansible_connection=local
+
diff --git a/library/xostosca.py b/library/xostosca.py
new file mode 100644
index 0000000..14619fe
--- /dev/null
+++ b/library/xostosca.py
@@ -0,0 +1,40 @@
+#!/usr/bin/python
+
+import json
+import os
+import requests
+import sys
+import traceback
+
+from ansible.module_utils.basic import AnsibleModule
+
+def main():
+
+    # args styled after the uri module
+    module = AnsibleModule(
+        argument_spec = dict(
+            url       = dict(required=True, type='str'),
+            recipe    = dict(required=True, type='str'),
+            user      = dict(required=True, type='str'),
+            password  = dict(required=True, type='str'),
+        )
+    )
+
+    xos_auth=(module.params['user'], module.params['password'])
+
+    r = requests.post(module.params['url'], data={"recipe": module.params['recipe']}, auth=xos_auth)
+    if r.status_code != 200:
+        try:
+            error_text=r.json()["error_text"]
+        except Exception:
+            error_text="error while formatting the error: " + traceback.format_exc()
+        module.fail_json(msg=error_text, rc=r.status_code)
+
+    result = r.json()
+    if "log_msgs" in result:
+        module.exit_json(changed=True, msg="\n".join(result["log_msgs"])+"\n")
+    else:
+        module.exit_json(changed=True, msg="success")
+
+if __name__ == '__main__':
+    main()
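The module above posts a TOSCA recipe to XOS and turns the HTTP response into an Ansible result. A minimal sketch of that response handling, factored into a pure function so it can be checked without a live XOS core (`summarize_tosca_response` is a hypothetical name, not part of the module):

```python
# Response handling mirroring library/xostosca.py: a non-200 status maps to a
# failure carrying XOS's error_text; a 200 with log_msgs joins them; otherwise
# a generic success message is returned.

def summarize_tosca_response(status_code, body):
    """Return (status, message) for an XOS TOSCA POST response.

    `body` is the decoded JSON payload (a dict).
    """
    if status_code != 200:
        # XOS reports recipe errors in an "error_text" field
        return ("failed", body.get("error_text", "unknown error"))
    if "log_msgs" in body:
        return ("ok", "\n".join(body["log_msgs"]) + "\n")
    return ("ok", "success")
```

In the module itself this logic is inlined around `module.fail_json` / `module.exit_json`; factoring it out like this is only for illustration.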
diff --git a/onboard-exampleservice-playbook.yml b/onboard-exampleservice-playbook.yml
new file mode 100644
index 0000000..2ca154f
--- /dev/null
+++ b/onboard-exampleservice-playbook.yml
@@ -0,0 +1,35 @@
+---
+# onboard-exampleservice-playbook.yml
+# Adds the exampleservice to the currently running pod
+
+- name: Include vars
+  hosts: all
+  tasks:
+    - name: Include variables
+      include_vars: "{{ item }}"
+      with_items:
+        - "profile_manifests/{{ cord_profile }}.yml"
+        - profile_manifests/local_vars.yml
+
+- name: Create exampleservice config
+  hosts: head
+  roles:
+    - exampleservice-config
+
+- include: add-bootstrap-containers-playbook.yml
+
+- name: Onboard exampleservice services
+  hosts: xos_bootstrap_ui
+  connection: docker
+  roles:
+    - exampleservice-onboard
+
+- include: add-onboard-containers-playbook.yml
+
+- name: Check to see if XOS UI is ready
+  hosts: xos_ui
+  connection: docker
+  roles:
+    - xos-ready
+
+
diff --git a/opencloud-multi-playbook.yml b/opencloud-multi-playbook.yml
index b782476..931d6cb 100644
--- a/opencloud-multi-playbook.yml
+++ b/opencloud-multi-playbook.yml
@@ -1,4 +1,5 @@
 ---
+# opencloud-multi-playbook.yml
 # Install an OpenCloud site, with multi-node Juju configured OpenStack
 
 - name: Include vars
@@ -7,9 +8,8 @@
     - name: Include variables
       include_vars: "{{ item }}"
       with_items:
-        - vars/opencloud_defaults.yml
-        - vars/aztest.yml
-        - vars/aztest_keystone.yml
+        - "profile_manifests/{{ cord_profile }}.yml"
+        - profile_manifests/local_vars.yml
 
 - name: Turn on virtualization
   hosts: all
@@ -42,14 +42,16 @@
     - { role: head-prep, become: yes }
     - { role: config-virt, become: yes }
 
-- name: Create VM's, Configure Juju, install XOS
+- name: Create LXD containers, configure Juju, install XOS
   hosts: head
   roles:
     - create-lxd
-    - create-vms
-    - xos-install
     - onos-vm-install
     - juju-setup
+    - juju-finish
+    - cord-profile
+    - xos-docker-images
+    - xos-bootstrap
     - docker-compose
     - xos-head-start
 
diff --git a/pki-setup.yml b/pki-setup-playbook.yml
similarity index 78%
rename from pki-setup.yml
rename to pki-setup-playbook.yml
index f8d24c6..1e4eb02 100644
--- a/pki-setup.yml
+++ b/pki-setup-playbook.yml
@@ -8,8 +8,8 @@
     - name: Include variables
       include_vars: "{{ item }}"
       with_items:
-        - "vars/{{ cord_profile }}.yml"
-        - vars/local_vars.yml
+        - "profile_manifests/{{ cord_profile }}.yml"
+        - profile_manifests/local_vars.yml
 
 - name: Create Root CA, Intermediate CA, Server certs
   hosts: localhost
diff --git a/pod-test-playbook.yml b/pod-test-playbook.yml
new file mode 100644
index 0000000..6237463
--- /dev/null
+++ b/pod-test-playbook.yml
@@ -0,0 +1,52 @@
+---
+# pod-test-playbook.yml
+# Tests the CORD-in-a-Box (CiaB) cord-pod XOS configuration
+
+- name: Include vars
+  hosts: all
+  tasks:
+    - name: Include variables
+      include_vars: "{{ item }}"
+      with_items:
+        - "profile_manifests/{{ cord_profile }}.yml"
+        - profile_manifests/local_vars.yml
+
+# - name: Run platform checks
+#   hosts: head
+#   become: no
+#   roles:
+#    - platform-check
+
+- name: Create test client
+  hosts: head
+  become: yes
+  roles:
+    - maas-test-client-install
+
+- name: Create test subscriber
+  hosts: head
+  roles:
+    - test-subscriber-config
+
+- include: add-onboard-containers-playbook.yml
+
+- name: Enable the test subscriber
+  hosts: xos_ui
+  connection: docker
+  roles:
+    - test-subscriber-enable
+
+- name: Test VSG
+  hosts: head
+  become: no
+  roles:
+    - test-vsg
+
+- include: onboard-exampleservice-playbook.yml
+
+- name: Test ExampleService
+  hosts: head
+  become: no
+  roles:
+    - test-exampleservice
+
diff --git a/cord-prep-platform.yml b/prep-platform-playbook.yml
similarity index 83%
rename from cord-prep-platform.yml
rename to prep-platform-playbook.yml
index 70313f1..8661208 100644
--- a/cord-prep-platform.yml
+++ b/prep-platform-playbook.yml
@@ -1,4 +1,5 @@
 ---
+# prep-platform-playbook.yml
 # Prepares the CORD head node for installing OpenStack, ONOS, and XOS
 
 - name: Include vars
@@ -7,9 +8,8 @@
     - name: Include variables
       include_vars: "{{ item }}"
       with_items:
-        - vars/cord_defaults.yml
-        - vars/cord.yml
-        - vars/example_keystone.yml
+        - "profile_manifests/{{ cord_profile }}.yml"
+        - profile_manifests/local_vars.yml
 
 - name: DNS Server and apt-cacher-ng Setup
   hosts: head
@@ -31,3 +31,4 @@
   roles:
     - common-prep
     - { role: cloudlab-prep, when: on_cloudlab }
+
diff --git a/profile_manifests/api-test.yml b/profile_manifests/api-test.yml
new file mode 100644
index 0000000..cba05fe
--- /dev/null
+++ b/profile_manifests/api-test.yml
@@ -0,0 +1,63 @@
+---
+# profile_manifests/api-test.yml
+
+site_name: mysite
+deployment_type: MyDeployment
+
+frontend_only: True
+use_openstack: False
+use_vtn: False
+
+build_xos_base_image: True
+build_xos_test_image: True
+
+source_ui_image: "xosproject/xos-test"
+
+xos_admin_user: padmin@vicci.org
+xos_admin_pass: letmein
+xos_admin_first: XOS
+xos_admin_last: admin
+
+xos_tosca_config_templates:
+  - management-net.yaml
+  - sample.yaml
+  - services.yaml
+  - volt-devices.yaml
+
+# paths relative to repo checkout, defined in manifest/default.xml
+xos_services:
+  - name: volt
+    path: onos-apps/apps/olt
+  - name: onos
+    path: orchestration/xos_services/onos-service
+  - name: vrouter
+    path: orchestration/xos_services/vrouter
+  - name: vsg
+    path: orchestration/xos_services/vsg
+  - name: vtr
+    path: orchestration/xos_services/vtr
+
+xos_service_sshkeys:
+  - name: onos_rsa
+    source_path: "/dev/null"
+  - name: onos_rsa.pub
+    source_path: "/dev/null"
+  - name: volt_rsa
+    source_path: "/dev/null"
+  - name: volt_rsa.pub
+    source_path: "/dev/null"
+  - name: vsg_rsa
+    source_path: "/dev/null"
+  - name: vsg_rsa.pub
+    source_path: "/dev/null"
+
+# site domain suffix
+site_suffix: opencloud.us
+
+# SSL server certificate generation
+server_certs:
+  - cn: "xos-core.{{ site_suffix }}"
+    subj: "/C=US/ST=California/L=Menlo Park/O=ON.Lab/OU=Test Deployment/CN=xos-core.{{ site_suffix }}"
+    altnames:
+      - "DNS:xos-core.{{ site_suffix }}"
+
diff --git a/profile_manifests/frontend.yml b/profile_manifests/frontend.yml
new file mode 100644
index 0000000..3afea33
--- /dev/null
+++ b/profile_manifests/frontend.yml
@@ -0,0 +1,30 @@
+---
+# profile_manifests/frontend.yml
+
+site_name: frontend
+deployment_type: "Frontend Mock"
+
+frontend_only: True
+use_openstack: False
+use_vtn: False
+
+build_xos_base_image: True
+
+xos_admin_user: xosadmin@opencord.org
+xos_admin_pass: "{{ lookup('password', 'credentials/xosadmin@opencord.org chars=ascii_letters,digits') }}"
+xos_admin_first: XOS
+xos_admin_last: admin
+
+xos_tosca_config_templates:
+  - sample.yaml
+
+# site domain suffix
+site_suffix: opencloud.us
+
+# SSL server certificate generation
+server_certs:
+  - cn: "xos-core.{{ site_suffix }}"
+    subj: "/C=US/ST=California/L=Menlo Park/O=ON.Lab/OU=Test Deployment/CN=xos-core.{{ site_suffix }}"
+    altnames:
+      - "DNS:xos-core.{{ site_suffix }}"
+
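The `xos_admin_pass` values in these profile manifests use Ansible's `password` lookup, which generates a random credential on first use and caches it under `credentials/`. A minimal Python sketch of the equivalent generation (length 20 is an assumption matching the lookup's documented default; the real lookup also handles file caching, which is omitted here):

```python
import secrets
import string

# Equivalent of lookup('password', '... chars=ascii_letters,digits'):
# a random string drawn from ASCII letters and digits.
def generate_profile_password(length=20):
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Because the lookup caches its result, repeated playbook runs reuse the same password; this sketch returns a fresh value on every call.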
diff --git a/vars/local_vars.yml b/profile_manifests/local_vars.yml
similarity index 100%
rename from vars/local_vars.yml
rename to profile_manifests/local_vars.yml
diff --git a/profile_manifests/mock-mcord.yml b/profile_manifests/mock-mcord.yml
new file mode 100644
index 0000000..15634f3
--- /dev/null
+++ b/profile_manifests/mock-mcord.yml
@@ -0,0 +1,40 @@
+---
+# profile_manifests/mock-mcord.yml
+# creates a mock MCORD pod
+
+site_name: mock-mcord
+deployment_type: "Mock M-CORD Pod"
+
+xos_admin_user: xosadmin@opencord.org
+xos_admin_pass: "{{ lookup('password', 'credentials/xosadmin@opencord.org chars=ascii_letters,digits') }}"
+xos_admin_first: XOS
+xos_admin_last: admin
+
+frontend_only: True
+use_openstack: False
+use_vtn: False
+
+build_xos_base_image: True
+
+xos_tosca_config_templates:
+  - sample.yaml
+  - management-net.yaml
+  - mock-mcord.yaml
+
+# GUI branding
+gui_branding_name: "M-CORD"
+gui_branding_icon: "/static/cord-logo.png"
+gui_branding_favicon: "/static/cord-favicon.png"
+gui_branding_bg: "/static/mcord-bg.jpg"
+gui_service_view_class: "core.views.mCordServiceGrid.ServiceGridView"
+
+# site domain suffix
+site_suffix: opencloud.us
+
+# SSL server certificate generation
+server_certs:
+  - cn: "xos-core.{{ site_suffix }}"
+    subj: "/C=US/ST=California/L=Menlo Park/O=ON.Lab/OU=Test Deployment/CN=xos-core.{{ site_suffix }}"
+    altnames:
+      - "DNS:xos-core.{{ site_suffix }}"
+
diff --git a/profile_manifests/mock-rcord.yml b/profile_manifests/mock-rcord.yml
new file mode 100644
index 0000000..3ca43a4
--- /dev/null
+++ b/profile_manifests/mock-rcord.yml
@@ -0,0 +1,72 @@
+---
+# profile_manifests/mock-rcord.yml
+# creates a mock R-CORD pod
+
+site_name: mock-rcord
+deployment_type: "Mock R-CORD Pod"
+
+xos_admin_user: xosadmin@opencord.org
+xos_admin_pass: "{{ lookup('password', 'credentials/xosadmin@opencord.org chars=ascii_letters,digits') }}"
+xos_admin_first: XOS
+xos_admin_last: admin
+
+frontend_only: True
+use_openstack: False
+use_vtn: False
+
+build_xos_base_image: True
+
+xos_tosca_config_templates:
+  - sample.yaml
+  - management-net.yaml
+  - mock-onos.yaml
+  - cord-services.yaml
+  - public-net.yaml
+  # - test-subscriber.yaml  # broken? Missing lan_network config on vOLT?
+  - volt-devices.yaml
+
+# GUI branding
+gui_branding_name: "CORD"
+gui_branding_icon: "/static/cord-logo.png"
+gui_branding_favicon: "/static/cord-favicon.png"
+gui_branding_bg: "/static/cord-bg.jpg"
+
+# paths defined in manifest/default.xml
+xos_services:
+  - name: volt
+    path: onos-apps/apps/olt
+  - name: onos
+    path: orchestration/xos_services/onos-service
+  - name: vrouter
+    path: orchestration/xos_services/vrouter
+  - name: vsg
+    path: orchestration/xos_services/vsg
+  - name: vtr
+    path: orchestration/xos_services/vtr
+  - name: fabric
+    path: orchestration/xos_services/fabric
+
+xos_service_sshkeys:
+  - name: onos_rsa
+    source_path: "/dev/null"
+  - name: onos_rsa.pub
+    source_path: "/dev/null"
+  - name: volt_rsa
+    source_path: "/dev/null"
+  - name: volt_rsa.pub
+    source_path: "/dev/null"
+  - name: vsg_rsa
+    source_path: "/dev/null"
+  - name: vsg_rsa.pub
+    source_path: "/dev/null"
+
+# site domain suffix
+site_suffix: opencloud.us
+
+# SSL server certificate generation
+server_certs:
+  - cn: "xos-core.{{ site_suffix }}"
+    subj: "/C=US/ST=California/L=Menlo Park/O=ON.Lab/OU=Test Deployment/CN=xos-core.{{ site_suffix }}"
+    altnames:
+      - "DNS:xos-core.{{ site_suffix }}"
+
diff --git a/profile_manifests/opencloud.yml b/profile_manifests/opencloud.yml
new file mode 100644
index 0000000..54b8584
--- /dev/null
+++ b/profile_manifests/opencloud.yml
@@ -0,0 +1,267 @@
+---
+# profile_manifests/opencloud.yml
+# Generic OpenCloud Site
+
+# site configuration
+site_name: generic_opencloud
+site_humanname: "Generic OpenCloud"
+deployment_type: campus
+
+xos_admin_user: xosadmin@opencord.org
+xos_admin_pass: "{{ lookup('password', 'credentials/xosadmin@opencord.org chars=ascii_letters,digits') }}"
+xos_admin_first: XOS
+xos_admin_last: Admin
+
+xos_users: []
+
+use_vtn: True
+
+xos_tosca_config_templates:
+  - openstack.yaml
+  - nodes.yaml
+  - vtn-service.yaml
+  - management-net.yaml
+
+cord_profile_dir: "{{ ansible_user_dir + '/cord_profile' }}"
+
+xos_docker_volumes:
+  - host: "{{ cord_profile_dir }}/images"
+    container: /opt/xos/images
+
+# GUI Branding
+# Not needed; default is OpenCloud
+
+# paths defined in manifest/default.xml
+xos_services:
+  - name: vtn
+    path: onos-apps/apps/vtn
+  - name: onos
+    path: orchestration/xos_services/onos-service
+  - name: vrouter
+    path: orchestration/xos_services/vrouter
+
+xos_service_sshkeys:
+  - name: onos_rsa
+    source_path: "~/.ssh/id_rsa"
+  - name: onos_rsa.pub
+    source_path: "~/.ssh/id_rsa.pub"
+
+
+# IP prefix for VMs
+virt_nets:
+  - name: mgmtbr
+    ipv4_prefix: 192.168.250
+    head_vms: true
+
+# DNS/domain settings
+site_suffix: generic.infra.opencloud.us
+
+dns_search:
+  - "{{ site_suffix }}"
+
+# SSL server certificate generation
+server_certs:
+  - cn: "keystone.{{ site_suffix }}"
+    subj: "/C=US/ST=California/L=Menlo Park/O=ON.Lab/OU=Test Deployment/CN=keystone.{{ site_suffix }}"
+    altnames:
+      - "DNS:keystone.{{ site_suffix }}"
+      - "DNS:keystone"
+  - cn: "xos-core.{{ site_suffix }}"
+    subj: "/C=US/ST=California/L=Menlo Park/O=ON.Lab/OU=Test Deployment/CN=xos-core.{{ site_suffix }}"
+    altnames:
+      - "DNS:xos-core.{{ site_suffix }}"
+
+# NSD/Unbound settings
+nsd_zones:
+  - name: "{{ site_suffix }}"
+    ipv4_first_octets: 192.168.250
+    name_reverse_unbound: "168.192.in-addr.arpa"
+    soa: ns1
+    ns:
+      - { name: ns1 }
+    nodelist: head_vm_list
+    aliases:
+      - { name: "ns1" , dest: "head" }
+      - { name: "ns" , dest: "head" }
+      - { name: "apt-cache" , dest: "head" }
+
+name_on_public_interface: head
+
+# If true, unbound listens on the head node's `ansible_default_ipv4` interface
+unbound_listen_on_default: True
+
+# VTN network configuration
+management_network_cidr: 172.27.0.0/24
+management_network_ip: 172.27.0.1/24
+data_plane_ip: 10.168.0.253/24
+
+on_maas: False
+
+run_dist_upgrade: True
+
+openstack_version: kilo
+
+juju_config_name: opencloud
+juju_config_path: /usr/local/src/juju_config.yml
+
+keystone_admin_password: "{{ lookup('password', 'credentials/generic_opencloud_keystone_admin chars=ascii_letters,digits') }}"
+
+deployment_flavors:
+  - m1.small
+  - m1.medium
+  - m1.large
+  - m1.xlarge
+
+apt_cacher_name: apt-cache
+
+apt_ssl_sites:
+  - apt.dockerproject.org
+  - butler.opencloud.cs.arizona.edu
+  - deb.nodesource.com
+
+charm_versions:
+  neutron-api: "cs:~cordteam/trusty/neutron-api-3"
+  nova-compute: "cs:~cordteam/trusty/nova-compute-2"
+
+head_vm_list: []
+
+head_lxd_list:
+  - name: "juju-1"
+    service: "juju"
+    aliases:
+       - "juju"
+    ipv4_last_octet: 10
+
+  - name: "ceilometer-1"
+    service: "ceilometer"
+    aliases:
+      - "ceilometer"
+    ipv4_last_octet: 20
+    forwarded_ports:
+      - { ext: 8777, int: 8777 }
+
+  - name: "glance-1"
+    service: "glance"
+    aliases:
+      - "glance"
+    ipv4_last_octet: 30
+    forwarded_ports:
+      - { ext: 9292, int: 9292 }
+
+  - name: "keystone-1"
+    service: "keystone"
+    aliases:
+      - "keystone"
+    ipv4_last_octet: 40
+    forwarded_ports:
+      - { ext: 35357, int: 35357 }
+      - { ext: 4990, int: 4990 }
+      - { ext: 5000, int: 5000 }
+
+  - name: "percona-cluster-1"
+    service: "percona-cluster"
+    aliases:
+      - "percona-cluster"
+    ipv4_last_octet: 50
+
+  - name: "neutron-api-1"
+    service: "neutron-api"
+    aliases:
+      - "neutron-api"
+    ipv4_last_octet: 70
+    forwarded_ports:
+      - { ext: 9696, int: 9696 }
+
+  - name: "nova-cloud-controller-1"
+    service: "nova-cloud-controller"
+    aliases:
+      - "nova-cloud-controller"
+    ipv4_last_octet: 90
+    forwarded_ports:
+      - { ext: 8774, int: 8774 }
+
+  - name: "openstack-dashboard-1"
+    service: "openstack-dashboard"
+    aliases:
+      - "openstack-dashboard"
+    ipv4_last_octet: 100
+    forwarded_ports:
+      - { ext: 8080, int: 80 }
+
+  - name: "rabbitmq-server-1"
+    service: "rabbitmq-server"
+    aliases:
+      - "rabbitmq-server"
+    ipv4_last_octet: 110
+
+  - name: "onos-cord-1"
+    aliases:
+      - "onos-cord"
+    ipv4_last_octet: 120
+    docker_path: "cord"
+
+  - name: "xos-1"
+    aliases:
+      - "xos"
+    ipv4_last_octet: 130
+    docker_path: 'service-profile/opencloud'
+
+lxd_service_list:
+  - ceilometer
+  - glance
+  - keystone
+  - neutron-api
+  - nova-cloud-controller
+  - openstack-dashboard
+  - percona-cluster
+  - rabbitmq-server
+
+standalone_service_list:
+  - ceilometer-agent
+  - ntp
+
+service_relations:
+  - name: keystone
+    relations: [ "percona-cluster", ]
+
+  - name: nova-cloud-controller
+    relations: [ "percona-cluster", "rabbitmq-server", "glance", "keystone", ]
+
+  - name: glance
+    relations: [ "percona-cluster", "keystone", ]
+
+  - name: neutron-api
+    relations: [ "keystone", "percona-cluster", "rabbitmq-server", "nova-cloud-controller", ]
+
+  - name: openstack-dashboard
+    relations: [ "keystone", ]
+
+  - name: ceilometer
+    relations: [ "mongodb", "rabbitmq-server" ]
+
+  - name: "ceilometer:identity-service"
+    relations: [ "keystone:identity-service", ]
+
+  - name: "ceilometer:ceilometer-service"
+    relations: [ "ceilometer-agent:ceilometer-service", ]
+
+
+compute_relations:
+  - name: nova-compute
+    relations: [ "ceilometer-agent", "glance", "nova-cloud-controller", ]
+
+  - name: "nova-compute:shared-db"
+    relations: [ "percona-cluster:shared-db", ]
+
+  - name: "nova-compute:amqp"
+    relations: [ "rabbitmq-server:amqp", ]
+
+  - name: ntp
+    relations: [ "nova-compute", ]
+
+
+xos_images:
+  - name: "trusty-server-multi-nic"
+    url: "http://www.vicci.org/opencloud/trusty-server-cloudimg-amd64-disk1.img"
+    checksum: "sha256:c2d0ffc937aeb96016164881052a496658efeb98959dc68e73d9895c5d9920f7"
+
diff --git a/profile_manifests/rcord.yml b/profile_manifests/rcord.yml
new file mode 100644
index 0000000..07d584c
--- /dev/null
+++ b/profile_manifests/rcord.yml
@@ -0,0 +1,354 @@
+---
+# profile_manifests/rcord.yml
+# Configures an R-CORD pod
+
+# site configuration
+site_name: mysite
+site_humanname: MySite
+deployment_type: MyDeployment
+
+xos_admin_user: xosadmin@opencord.org
+xos_admin_pass: "{{ lookup('password', 'credentials/xosadmin@opencord.org chars=ascii_letters,digits') }}"
+xos_admin_first: XOS
+xos_admin_last: Admin
+
+xos_users: []
+
+use_vtn: True
+
+xos_tosca_config_templates:
+  - openstack.yaml
+  - vtn-service.yaml
+  - fabric-service.yaml
+  - management-net.yaml
+  - cord-services.yaml  # should unify this with services.yaml.j2 eventually
+  - public-net.yaml
+  - volt-devices.yaml
+  - vrouter.yaml
+
+xos_other_templates:
+  - fabric-network-cfg.json
+
+cord_profile_dir: "{{ ansible_user_dir + '/cord_profile' }}"
+
+xos_docker_volumes:
+  - host: "{{ cord_profile_dir }}/images"
+    container: /opt/xos/images
+
+# GUI branding
+gui_branding_name: "CORD"
+gui_branding_icon: "/static/cord-logo.png"
+gui_branding_favicon: "/static/cord-favicon.png"
+gui_branding_bg: "/static/cord-bg.jpg"
+
+# paths defined in manifest/default.xml
+xos_services:
+  - name: volt
+    path: onos-apps/apps/olt
+  - name: vtn
+    path: onos-apps/apps/vtn
+  - name: openstack
+    path: orchestration/xos_services/openstack
+  - name: onos
+    path: orchestration/xos_services/onos-service
+  - name: vrouter
+    path: orchestration/xos_services/vrouter
+  - name: vsg
+    path: orchestration/xos_services/vsg
+  - name: vtr
+    path: orchestration/xos_services/vtr
+  - name: fabric
+    path: orchestration/xos_services/fabric
+# needed until the onboarding synchronizer no longer requires service code to be present when started
+  - name: exampleservice
+    path: orchestration/xos_services/exampleservice
+
+xos_service_sshkeys:
+  - name: onos_rsa
+    source_path: "~/.ssh/id_rsa"
+  - name: onos_rsa.pub
+    source_path: "~/.ssh/id_rsa.pub"
+  - name: volt_rsa
+    source_path: "~/.ssh/id_rsa"
+  - name: volt_rsa.pub
+    source_path: "~/.ssh/id_rsa.pub"
+  - name: vsg_rsa
+    source_path: "~/.ssh/id_rsa"
+  - name: vsg_rsa.pub
+    source_path: "~/.ssh/id_rsa.pub"
+# needed until the onboarding synchronizer no longer requires service code to be present when started
+  - name: exampleservice_rsa
+    source_path: "~/.ssh/id_rsa"
+  - name: exampleservice_rsa.pub
+    source_path: "~/.ssh/id_rsa.pub"
+
+# VM networks/bridges on head
+virt_nets:
+  - name: mgmtbr
+    ipv4_prefix: 192.168.122
+    head_vms: true
+
+# site domain suffix
+site_suffix: cord.lab
+
+# resolv.conf settings
+dns_search:
+  - "{{ site_suffix }}"
+
+# SSL server certificate generation
+server_certs:
+  - cn: "keystone.{{ site_suffix }}"
+    subj: "/C=US/ST=California/L=Menlo Park/O=ON.Lab/OU=Test Deployment/CN=keystone.{{ site_suffix }}"
+    altnames:
+      - "DNS:keystone.{{ site_suffix }}"
+      - "DNS:keystone"
+  - cn: "xos-core.{{ site_suffix }}"
+    subj: "/C=US/ST=California/L=Menlo Park/O=ON.Lab/OU=Test Deployment/CN=xos-core.{{ site_suffix }}"
+    altnames:
+      - "DNS:xos-core.{{ site_suffix }}"
+
+# NSD/Unbound settings
+nsd_zones:
+  - name: "{{ site_suffix }}"
+    ipv4_first_octets: 192.168.122
+    name_reverse_unbound: "168.192.in-addr.arpa"
+    soa: ns1
+    ns:
+      - { name: ns1 }
+    nodelist: head_vm_list
+    aliases:
+      - { name: "ns1" , dest: "head" }
+      - { name: "ns" , dest: "head" }
+      - { name: "apt-cache" , dest: "head" }
+
+name_on_public_interface: head
+
+# VTN network configuration
+management_network_cidr: 172.27.0.0/24
+management_network_ip: 172.27.0.1/24
+data_plane_ip: 10.168.0.253/24
+
+# CORD ONOS app version
+cord_app_version: 1.2-SNAPSHOT
+
+# If true, unbound listens on the head node's `ansible_default_ipv4` interface
+unbound_listen_on_default: True
+
+# turn this on, or override it when running the playbook with --extra-vars="on_cloudlab=True"
+on_cloudlab: False
+
+# turn this off, or override when running playbook with --extra-vars="on_maas=False"
+on_maas: True
+
+run_dist_upgrade: False
+
+maas_node_key: /etc/maas/ansible/id_rsa
+
+openstack_version: kilo
+
+juju_config_name: cord
+
+juju_config_path: /usr/local/src/juju_config.yml
+
+# Pull ONOS from local Docker registry rather than Docker Hub
+onos_docker_image: "docker-registry:5000/opencord/onos:candidate"
+
+keystone_admin_password: "{{ lookup('password', 'credentials/cord_keystone_admin chars=ascii_letters,digits') }}"
+
+deployment_flavors:
+  - m1.small
+  - m1.medium
+  - m1.large
+  - m1.xlarge
+
+apt_cacher_name: apt-cache
+
+apt_ssl_sites:
+  - apt.dockerproject.org
+  - butler.opencloud.cs.arizona.edu
+  - deb.nodesource.com
+
+charm_versions:
+  ceilometer: "cs:trusty/ceilometer-17"
+  ceilometer-agent: "cs:trusty/ceilometer-agent-13"
+  glance: "cs:trusty/glance-28"
+  keystone: "cs:trusty/keystone-33"
+  mongodb: "cs:trusty/mongodb-33"
+  percona-cluster: "cs:trusty/percona-cluster-31"
+  nagios: "cs:trusty/nagios-10"
+  neutron-api: "cs:~cordteam/trusty/neutron-api-4"
+  nova-cloud-controller: "cs:trusty/nova-cloud-controller-64"
+  nova-compute: "cs:~cordteam/trusty/nova-compute-2"
+  nrpe: "cs:trusty/nrpe-4"
+  ntp: "cs:trusty/ntp-14"
+  openstack-dashboard: "cs:trusty/openstack-dashboard-19"
+  rabbitmq-server: "cs:trusty/rabbitmq-server-42"
+
+head_vm_list: []
+
+head_lxd_list:
+  - name: "juju-1"
+    service: "juju"
+    aliases:
+      - "juju"
+    ipv4_last_octet: 10
+
+  - name: "ceilometer-1"
+    service: "ceilometer"
+    aliases:
+      - "ceilometer"
+    ipv4_last_octet: 20
+    forwarded_ports:
+      - { ext: 8777, int: 8777 }
+
+  - name: "glance-1"
+    service: "glance"
+    aliases:
+      - "glance"
+    ipv4_last_octet: 30
+    forwarded_ports:
+      - { ext: 9292, int: 9292 }
+
+  - name: "keystone-1"
+    service: "keystone"
+    aliases:
+      - "keystone"
+    ipv4_last_octet: 40
+    forwarded_ports:
+      - { ext: 35357, int: 35357 }
+      - { ext: 4990, int: 4990 }
+      - { ext: 5000, int: 5000 }
+
+  - name: "percona-cluster-1"
+    service: "percona-cluster"
+    aliases:
+      - "percona-cluster"
+    ipv4_last_octet: 50
+
+  - name: "nagios-1"
+    service: "nagios"
+    aliases:
+      - "nagios"
+    ipv4_last_octet: 60
+    forwarded_ports:
+      - { ext: 3128, int: 80 }
+
+  - name: "neutron-api-1"
+    service: "neutron-api"
+    aliases:
+      - "neutron-api"
+    ipv4_last_octet: 70
+    forwarded_ports:
+      - { ext: 9696, int: 9696 }
+
+  - name: "nova-cloud-controller-1"
+    service: "nova-cloud-controller"
+    aliases:
+      - "nova-cloud-controller"
+    ipv4_last_octet: 80
+    forwarded_ports:
+      - { ext: 8774, int: 8774 }
+
+  - name: "openstack-dashboard-1"
+    service: "openstack-dashboard"
+    aliases:
+      - "openstack-dashboard"
+    ipv4_last_octet: 90
+    forwarded_ports:
+      - { ext: 8080, int: 80 }
+
+  - name: "rabbitmq-server-1"
+    service: "rabbitmq-server"
+    aliases:
+      - "rabbitmq-server"
+    ipv4_last_octet: 100
+
+  - name: "mongodb-1"
+    service: "mongodb"
+    aliases:
+      - "mongodb"
+    ipv4_last_octet: 110
+
+lxd_service_list:
+  - ceilometer
+  - glance
+  - keystone
+  - mongodb
+  - nagios
+  - neutron-api
+  - nova-cloud-controller
+  - openstack-dashboard
+  - percona-cluster
+  - rabbitmq-server
+
+standalone_service_list:
+  - ntp
+  - nrpe
+  - ceilometer-agent
+
+
+service_relations:
+  - name: keystone
+    relations: [ "percona-cluster", "nrpe", ]
+
+  - name: nova-cloud-controller
+    relations: [ "percona-cluster", "rabbitmq-server", "glance", "keystone", "nrpe", ]
+
+  - name: glance
+    relations: [ "percona-cluster", "keystone", "nrpe", ]
+
+  - name: neutron-api
+    relations: [ "keystone",  "percona-cluster", "rabbitmq-server", "nova-cloud-controller", "nrpe", ]
+
+  - name: openstack-dashboard
+    relations: [ "keystone", "nrpe", ]
+
+  - name: nagios
+    relations: [ "nrpe", ]
+
+  - name: "percona-cluster:juju-info"
+    relations: [ "nrpe:general-info", ]
+
+  - name: rabbitmq-server
+    relations: [ "nrpe", ]
+
+  - name: ceilometer
+    relations: [ "mongodb", "rabbitmq-server", "nagios", "nrpe", ]
+
+  - name: "ceilometer:identity-service"
+    relations: [ "keystone:identity-service", ]
+
+  - name: "ceilometer:ceilometer-service"
+    relations: [ "ceilometer-agent:ceilometer-service", ]
+
+
+compute_relations:
+  - name: nova-compute
+    relations: [ "ceilometer-agent", "glance", "nova-cloud-controller", "nagios", "nrpe", ]
+
+  - name: "nova-compute:shared-db"
+    relations: [ "percona-cluster:shared-db", ]
+
+  - name: "nova-compute:amqp"
+    relations: [ "rabbitmq-server:amqp", ]
+
+  - name: ntp
+    relations: [ "nova-compute", ]
+
+
+xos_images:
+  - name: "trusty-server-multi-nic"
+    url: "http://www.vicci.org/opencloud/trusty-server-cloudimg-amd64-disk1.img.20170201"
+    checksum: "sha256:ebf007ba3ec1043b7cd011fc6668e2a1d1d4c69c41071e8513ab355df7a057cb"
+
+  - name: "vsg-1.1"
+    url: "http://www.vicci.org/cord/vsg-1.1.img"
+    checksum: "sha256:16b0beb6778aed0f5feecb05f8d5750e6c262f98e6011e99ddadf7d46a177b6f"
+
+  - name: "ceilometer-trusty-server-multi-nic"
+    url: "http://www.vicci.org/cord/ceilometer-trusty-server-multi-nic.compressed.qcow2"
+    checksum: "sha256:b77ef8d692b640568dea13df99fe1dfcb1f4bb4ac05408db9ff77399b34f754f"
+
+  - name: "ceilometer-service-trusty-server-multi-nic"
+    url: "http://www.vicci.org/cord/ceilometer-service-trusty-server-multi-nic.compressed.qcow2.20170131"
+    checksum: "sha256:f0341e283f0f2cb8f70cd1a6347e0081c9c8492ef34eb6397c657ef824800d4f"
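Each `xos_images` entry pairs a download URL with an `algorithm:hexdigest` checksum string, the format Ansible's `get_url` module accepts. A minimal standalone sketch of verifying fetched image bytes against such a string (illustration only, not code from this repo):

```python
import hashlib

def image_checksum_ok(data, checksum):
    """Validate bytes against an 'algo:hexdigest' string, e.g.
    'sha256:c2d0ff...' as listed under xos_images."""
    algo, _, expected = checksum.partition(":")
    return hashlib.new(algo, data).hexdigest() == expected.lower()
```

In practice `get_url` performs this check during download; a helper like this is only useful when fetching images outside of Ansible.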
diff --git a/roles/api-test-prep/tasks/main.yml b/roles/api-test-prep/tasks/main.yml
new file mode 100644
index 0000000..7878862
--- /dev/null
+++ b/roles/api-test-prep/tasks/main.yml
@@ -0,0 +1,14 @@
+---
+# api-test-prep/tasks/main.yml
+
+- name: Install node packages for api tests with npm
+  command: "npm install --production"
+  args:
+    chdir: "/opt/xos/tests/api"
+  tags:
+    - skip_ansible_lint # run during testing only
+
+- name: Install dredd-hooks with pip
+  pip:
+    name: dredd-hooks
+
diff --git a/roles/api-tests/defaults/main.yml b/roles/api-tests/defaults/main.yml
new file mode 100644
index 0000000..f78a3b2
--- /dev/null
+++ b/roles/api-tests/defaults/main.yml
@@ -0,0 +1,4 @@
+---
+# api-tests/defaults/main.yml
+
+cord_dir: "{{ hostvars['localhost']['ansible_user_dir'] + '/cord' }}"
diff --git a/roles/api-tests/tasks/main.yml b/roles/api-tests/tasks/main.yml
new file mode 100644
index 0000000..843a384
--- /dev/null
+++ b/roles/api-tests/tasks/main.yml
@@ -0,0 +1,25 @@
+---
+# api-tests/tasks/main.yml
+
+- name: Copy apiary.apib to target location
+  copy:
+    src: "{{ cord_dir }}/orchestration/xos/apiary.apib"
+    dest: "/opt/xos/tests/api/apiary.apib"
+
+- name: Run API tests
+  command: "npm test"
+  args:
+    chdir: "/opt/xos/tests/api"
+  register: api_tests_out
+  ignore_errors: yes
+  tags:
+    - skip_ansible_lint # run during testing only
+
+- name: Save output from API tests
+  copy:
+    content: "{{ api_tests_out.stdout_lines }}"
+    dest: "/tmp/api-tests.out"
+
+- name: Print output from API test
+  debug: var=api_tests_out.stdout_lines
+
diff --git a/roles/automation-integration/templates/do-enlist-compute-node.j2 b/roles/automation-integration/templates/do-enlist-compute-node.j2
index 1c96cce..3490c8c 100644
--- a/roles/automation-integration/templates/do-enlist-compute-node.j2
+++ b/roles/automation-integration/templates/do-enlist-compute-node.j2
@@ -3,6 +3,7 @@
 ID=$1
 HOSTNAME=$2
 LOG=/etc/maas/ansible/logs/$ID.log
+COMPUTE_USER=ubuntu
 
 INV=$(mktemp)
 cat >$INV <<EO_INV
@@ -10,17 +11,17 @@
 juju-head-node.cord.lab ansible_user={{ ansible_user_id }}
 
 [compute]
-$HOSTNAME ansible_user=ubuntu
+$HOSTNAME ansible_user=$COMPUTE_USER
 EO_INV
 
 echo "BEGIN INVENTORY FILE" >> $LOG
 cat $INV >> $LOG
 echo "END INVENTORY_FILE" >> $LOG
 
-echo "cd /opt/cord/build/platform-install; ansible-playbook --private-key=/etc/maas/ansible/id_rsa --extra-vars 'cord_in_a_box={{ cord_in_a_box }}' -i $INV cord-compute-playbook.yml" >> $LOG
+echo "cd /opt/cord/build/platform-install; ansible-playbook --private-key=/etc/maas/ansible/id_rsa -u $COMPUTE_USER --extra-vars '@{{ cord_dir }}/build/genconfig/config.yml' -i $INV cord-compute-maas-playbook.yml" >> $LOG
 
 cd /opt/cord/build/platform-install
-ansible-playbook --private-key=/etc/maas/ansible/id_rsa --extra-vars 'cord_in_a_box={{ cord_in_a_box }}' -i $INV cord-compute-playbook.yml >> $LOG
+ansible-playbook --private-key=/etc/maas/ansible/id_rsa -u $COMPUTE_USER --extra-vars '@{{ cord_dir }}/build/genconfig/config.yml' -i $INV cord-compute-maas-playbook.yml >> $LOG
 
 RESULT=$?
 rm $INV
diff --git a/roles/compute-node-config/defaults/main.yml b/roles/compute-node-config/defaults/main.yml
new file mode 100644
index 0000000..70507cc
--- /dev/null
+++ b/roles/compute-node-config/defaults/main.yml
@@ -0,0 +1,14 @@
+---
+# compute-node-config/defaults/main.yml
+
+cord_profile_dir: "{{ ansible_user_dir + '/cord_profile' }}"
+
+# service configs referenced here are likely located in cord-profile/templates
+
+# used in openstack-compute-vtn.yaml.j2, referencing network in management-net.yaml.j2
+use_management_hosts: False
+vtn_management_host_net_interface: veth3
+
+# used in openstack-compute-vtn.yaml.j2, referencing service in fabric.yaml.j2
+use_fabric: False
+
diff --git a/roles/compute-node-config/tasks/main.yml b/roles/compute-node-config/tasks/main.yml
new file mode 100644
index 0000000..6500dbb
--- /dev/null
+++ b/roles/compute-node-config/tasks/main.yml
@@ -0,0 +1,15 @@
+---
+# compute-node-config/tasks/main.yml
+#
+# Build TOSCA to tell XOS that new OpenStack compute nodes have been added
+
+- name: Create OpenStack compute node TOSCA
+  template:
+    src: "{{ item }}.j2"
+    dest: "{{ cord_profile_dir }}/{{ item }}"
+    owner: "{{ ansible_user_id }}"
+    mode: 0644
+  with_items:
+    - openstack-compute.yaml
+    - openstack-compute-vtn.yaml
+
diff --git a/roles/compute-node-config/templates/openstack-compute-vtn.yaml.j2 b/roles/compute-node-config/templates/openstack-compute-vtn.yaml.j2
new file mode 100644
index 0000000..b43e1e3
--- /dev/null
+++ b/roles/compute-node-config/templates/openstack-compute-vtn.yaml.j2
@@ -0,0 +1,116 @@
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+imports:
+   - custom_types/xos.yaml
+
+description: Configures VTN networking for OpenStack compute nodes
+
+topology_template:
+  node_templates:
+
+# VTN ONOS app, fully defined in vtn-service.yaml
+    service#ONOS_CORD:
+      type: tosca.nodes.ONOSService
+      properties:
+        no-delete: true
+        no-create: true
+        no-update: true
+
+{% if use_fabric %}
+# Fabric, fully defined in fabric.yaml
+    service#ONOS_Fabric:
+      type: tosca.nodes.ONOSService
+      properties:
+        no-delete: true
+        no-create: true
+        no-update: true
+{% endif %}
+
+# VTN networking for OpenStack Compute Nodes
+{% for node in groups["compute"] %}
+{% if 'ipv4' in hostvars[node]['ansible_fabric'] %}
+
+# Compute node, fully defined in compute-nodes.yaml
+    {{ hostvars[node]['ansible_hostname'] }}:
+      type: tosca.nodes.Node
+      properties:
+        no-delete: true
+        no-create: true
+        no-update: true
+
+# VTN bridgeId field for node {{ hostvars[node]['ansible_hostname'] }}
+    {{ hostvars[node]['ansible_hostname'] }}_bridgeId_tag:
+      type: tosca.nodes.Tag
+      properties:
+        name: bridgeId
+        value: of:0000{{ hostvars[node]['ansible_fabric']['macaddress'] | hwaddr('bare') }}
+      requirements:
+        - target:
+            node: {{ hostvars[node]['ansible_hostname'] }}
+            relationship: tosca.relationships.TagsObject
+        - service:
+            node: service#ONOS_CORD
+            relationship: tosca.relationships.MemberOfService
+
+# VTN dataPlaneIntf field for node {{ hostvars[node]['ansible_hostname'] }}
+    {{ hostvars[node]['ansible_hostname'] }}_dataPlaneIntf_tag:
+      type: tosca.nodes.Tag
+      properties:
+        name: dataPlaneIntf
+        value: fabric
+      requirements:
+        - target:
+            node: {{ hostvars[node]['ansible_hostname'] }}
+            relationship: tosca.relationships.TagsObject
+        - service:
+            node: service#ONOS_CORD
+            relationship: tosca.relationships.MemberOfService
+
+# VTN dataPlaneIp field for node {{ hostvars[node]['ansible_hostname'] }}
+    {{ hostvars[node]['ansible_hostname'] }}_dataPlaneIp_tag:
+      type: tosca.nodes.Tag
+      properties:
+        name: dataPlaneIp
+        value: {{ ( hostvars[node]['ansible_fabric']['ipv4']['address'] ~ '/' ~ hostvars[node]['ansible_fabric']['ipv4']['netmask'] ) | ipaddr('cidr') }}
+      requirements:
+        - target:
+            node: {{ hostvars[node]['ansible_hostname'] }}
+            relationship: tosca.relationships.TagsObject
+        - service:
+            node: service#ONOS_CORD
+            relationship: tosca.relationships.MemberOfService
+
+{% if use_management_hosts %}
+# VTN management interface field for node {{ hostvars[node]['ansible_hostname'] }}
+    {{ hostvars[node]['ansible_hostname'] }}_hostManagementIface_tag:
+      type: tosca.nodes.Tag
+      properties:
+        name: hostManagementIface
+        value: {{ vtn_management_host_net_interface }}
+      requirements:
+        - target:
+            node: {{ hostvars[node]['ansible_hostname'] }}
+            relationship: tosca.relationships.TagsObject
+        - service:
+            node: service#ONOS_CORD
+            relationship: tosca.relationships.MemberOfService
+{% endif %}
+
+{% if use_fabric %}
+# Fabric location field for node {{ hostvars[node]['ansible_hostname'] }}
+    {{ hostvars[node]['ansible_hostname'] }}_location_tag:
+      type: tosca.nodes.Tag
+      properties:
+        name: location
+        value: of:0000000000000001/1
+      requirements:
+        - target:
+            node: {{ hostvars[node]['ansible_hostname'] }}
+            relationship: tosca.relationships.TagsObject
+        - service:
+            node: service#ONOS_Fabric
+            relationship: tosca.relationships.MemberOfService
+{% endif %}
+{% endif %}
+{% endfor %}
+
diff --git a/roles/xos-compute-setup/templates/nodes.yaml.j2 b/roles/compute-node-config/templates/openstack-compute.yaml.j2
similarity index 67%
rename from roles/xos-compute-setup/templates/nodes.yaml.j2
rename to roles/compute-node-config/templates/openstack-compute.yaml.j2
index 7ba953b..b0849dc 100644
--- a/roles/xos-compute-setup/templates/nodes.yaml.j2
+++ b/roles/compute-node-config/templates/openstack-compute.yaml.j2
@@ -3,7 +3,7 @@
 imports:
    - custom_types/xos.yaml
 
-description: list of compute nodes, created by platform-install
+description: Adds OpenStack compute nodes
 
 topology_template:
   node_templates:
@@ -11,12 +11,21 @@
 # Site/Deployment, fully defined in deployment.yaml
     {{ site_name }}:
       type: tosca.nodes.Site
+      properties:
+        no-delete: true
+        no-create: true
+        no-update: true
 
     {{ deployment_type }}:
       type: tosca.nodes.Deployment
+      properties:
+        no-delete: true
+        no-create: true
+        no-update: true
 
-# compute nodes
+# OpenStack compute nodes
 {% for node in groups["compute"] %}
+{% if 'ipv4' in hostvars[node]['ansible_fabric'] %}
     {{ hostvars[node]['ansible_hostname'] }}:
       type: tosca.nodes.Node
       requirements:
@@ -27,5 +36,6 @@
             node: {{ deployment_type }}
             relationship: tosca.relationships.MemberOfDeployment
 
+{% endif %}
 {% endfor %}
 
diff --git a/roles/compute-node-enable-maas/defaults/main.yml b/roles/compute-node-enable-maas/defaults/main.yml
new file mode 100644
index 0000000..b84e46d
--- /dev/null
+++ b/roles/compute-node-enable-maas/defaults/main.yml
@@ -0,0 +1,9 @@
+---
+# compute-node-enable-maas/defaults/main.yml
+
+cord_profile_dir: "{{ ansible_user_dir + '/cord_profile' }}"
+
+xos_admin_user: "xosadmin@opencord.org"
+xos_admin_pass: "{{ lookup('password', 'credentials/xosadmin@opencord.org chars=ascii_letters,digits') }}"
+
+xos_ui_port: 9000
diff --git a/roles/compute-node-enable-maas/tasks/main.yml b/roles/compute-node-enable-maas/tasks/main.yml
new file mode 100644
index 0000000..9892aa1
--- /dev/null
+++ b/roles/compute-node-enable-maas/tasks/main.yml
@@ -0,0 +1,39 @@
+---
+# compute-node-enable-maas/tasks/main.yml
+
+- name: Fetch generated TOSCA files from the head node
+  fetch:
+    src: "{{ cord_profile_dir + '/' + item }}"
+    dest: "/tmp/{{ item }}"
+    flat: yes
+    fail_on_missing: yes
+  with_items:
+    - openstack.yaml
+    - vtn-service.yaml
+    - openstack-compute.yaml
+    - openstack-compute-vtn.yaml
+
+- name: Load TOSCA to add OpenStack compute nodes, over REST
+  xostosca:
+    url: "http://xos.{{ site_suffix }}:{{ xos_ui_port }}/api/utility/tosca/run/"
+    user: "{{ xos_admin_user }}"
+    password: "{{ xos_admin_pass }}"
+    recipe: "{{ lookup('file', '/tmp/' + item ) }}"
+  with_items:
+    - openstack.yaml
+    - openstack-compute.yaml
+
+- name: Pause to work around race in VTN or ONOS synchronizers
+  pause:
+    seconds: 20
+
+- name: Load TOSCA to enable VTN on OpenStack compute nodes, over REST
+  xostosca:
+    url: "http://xos.{{ site_suffix }}:{{ xos_ui_port }}/api/utility/tosca/run/"
+    user: "{{ xos_admin_user }}"
+    password: "{{ xos_admin_pass }}"
+    recipe: "{{ lookup('file', '/tmp/' + item ) }}"
+  with_items:
+    - vtn-service.yaml
+    - openstack-compute-vtn.yaml
+
diff --git a/roles/compute-node-enable/defaults/main.yml b/roles/compute-node-enable/defaults/main.yml
new file mode 100644
index 0000000..a3a1e7c
--- /dev/null
+++ b/roles/compute-node-enable/defaults/main.yml
@@ -0,0 +1,5 @@
+---
+# compute-node-enable/defaults/main.yml
+
+cord_profile_dir: "{{ ansible_user_dir + '/cord_profile' }}"
+
diff --git a/roles/compute-node-enable/tasks/main.yml b/roles/compute-node-enable/tasks/main.yml
new file mode 100644
index 0000000..5f3557c
--- /dev/null
+++ b/roles/compute-node-enable/tasks/main.yml
@@ -0,0 +1,17 @@
+---
+# compute-node-enable/tasks/main.yml
+
+- name: Load TOSCA to add OpenStack compute nodes
+  command: "python /opt/xos/tosca/run.py {{ xos_admin_user }} {{ cord_profile_dir }}/openstack-compute.yaml"
+  tags:
+    - skip_ansible_lint # TOSCA loading should be idempotent
+
+- name: Pause to work around race in VTN or ONOS synchronizers
+  pause:
+    seconds: 20
+
+- name: Load TOSCA to enable VTN on OpenStack compute nodes
+  command: "python /opt/xos/tosca/run.py {{ xos_admin_user }} {{ cord_profile_dir }}/openstack-compute-vtn.yaml"
+  tags:
+    - skip_ansible_lint # TOSCA loading should be idempotent
+
diff --git a/roles/cord-profile/defaults/main.yml b/roles/cord-profile/defaults/main.yml
new file mode 100644
index 0000000..a5d2ce8
--- /dev/null
+++ b/roles/cord-profile/defaults/main.yml
@@ -0,0 +1,85 @@
+---
+# cord-profile/defaults/main.yml
+
+cord_dir: "{{ ansible_user_dir + '/cord' }}"
+cord_profile_dir: "{{ ansible_user_dir + '/cord_profile' }}"
+
+# used in xos.yaml.j2; if True, other synchronizer containers will not be started
+frontend_only: False
+
+# Set to True to copy the admin-openrc.sh OpenStack config file
+use_openstack: True
+
+# Set to True to create the xos_redis container in the bootstrap context
+use_redis: True
+
+use_vtn: True
+
+xos_docker_networks:
+  - "xos"
+
+xos_docker_volumes: []
+
+xos_bootstrap_ui_port: 9001
+xos_ui_port: 9000
+
+xos_users: []
+
+xos_libraries:
+  - "ng-xos-lib"
+
+xos_services: []
+xos_service_sshkeys: []
+
+xos_images: []
+
+xos_tosca_config_templates: []
+
+xos_other_templates: []
+
+# GUI branding, used in xos_common_config.j2
+disable_minidashboard: "True"
+gui_branding_name: "OpenCloud"
+gui_branding_icon: "/static/logo.png"
+gui_branding_favicon: "/static/favicon.png"
+gui_branding_bg: "/static/bg.jpg"
+gui_service_view_class: False
+
+# used in deployment.yaml.j2
+xos_admin_user: "xosadmin@opencord.org"
+xos_admin_pass: "{{ lookup('password', 'credentials/xosadmin@opencord.org chars=ascii_letters,digits') }}"
+xos_admin_first: XOS
+xos_admin_last: Admin
+
+site_name: sitename
+site_humanname: "Site HumanName"
+
+deployment_type: deploymenttype
+
+deployment_flavors:
+  - m1.small
+  - m1.medium
+  - m1.large
+  - m1.xlarge
+
+# used in management-net.yaml.j2
+management_network_cidr: 172.27.0.0/24
+
+use_management_hosts: False
+management_hosts_net_cidr: 10.1.0.1/24
+management_hosts_net_range_xos_low: "10.1.0.128"
+management_hosts_net_range_xos_high: "10.1.0.254"
+
+# used in fabric.yaml.j2
+use_fabric: False
+fabric_network_cfg_json: "/opt/cord_profile/fabric-network-cfg.json"
+
+# used in volt-devices.yaml.j2
+volt_devices:
+  - name: voltdev
+    openflow_id: "of:1000000000000001"
+    access_devices: "2 222, 3 223, 4 224"
+    agent_mac: "AA:BB:CC:DD:EE:FF"
+    agent_port_mappings: "of:0000000000000002/2 DE:AD:BE:EF:BA:11, of:0000000000000002/3 BE:EF:DE:AD:BE:EF"
+
+cord_app_version: "1.2-SNAPSHOT"
diff --git a/roles/cord-profile/files/disable-onboarding.yaml b/roles/cord-profile/files/disable-onboarding.yaml
new file mode 100644
index 0000000..f597d70
--- /dev/null
+++ b/roles/cord-profile/files/disable-onboarding.yaml
@@ -0,0 +1,16 @@
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: Disable builds for the Onboarding synchronizer
+
+imports:
+   - custom_types/xos.yaml
+
+topology_template:
+  node_templates:
+    xos:
+      type: tosca.nodes.XOS
+      properties:
+        no-create: true
+        no-delete: true
+        enable_build: false
+
diff --git a/roles/cord-profile/files/enable-onboarding.yaml b/roles/cord-profile/files/enable-onboarding.yaml
new file mode 100644
index 0000000..73545e8
--- /dev/null
+++ b/roles/cord-profile/files/enable-onboarding.yaml
@@ -0,0 +1,16 @@
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: Enable builds for the Onboarding synchronizer
+
+imports:
+   - custom_types/xos.yaml
+
+topology_template:
+  node_templates:
+    xos:
+      type: tosca.nodes.XOS
+      properties:
+        no-create: true
+        no-delete: true
+        enable_build: true
+
diff --git a/roles/cord-profile/files/fixtures.yaml b/roles/cord-profile/files/fixtures.yaml
new file mode 100644
index 0000000..00da082
--- /dev/null
+++ b/roles/cord-profile/files/fixtures.yaml
@@ -0,0 +1,156 @@
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: Some basic fixtures
+
+imports:
+   - custom_types/xos.yaml
+
+topology_template:
+  node_templates:
+
+    xos:
+      type: tosca.nodes.XOS
+
+# -----------------------------------------------------------------------------
+# Network Parameter Types
+# -----------------------------------------------------------------------------
+
+    s_tag:
+      type: tosca.nodes.NetworkParameterType
+
+    c_tag:
+      type: tosca.nodes.NetworkParameterType
+
+    next_hop:
+      type: tosca.nodes.NetworkParameterType
+
+    device:
+      type: tosca.nodes.NetworkParameterType
+
+    bridge:
+      type: tosca.nodes.NetworkParameterType
+
+    neutron_port_name:
+      type: tosca.nodes.NetworkParameterType
+
+# ----------------------------------------------------------------------------
+# Roles
+# ----------------------------------------------------------------------------
+
+    siterole#admin:
+      type: tosca.nodes.SiteRole
+
+    siterole#pi:
+      type: tosca.nodes.SiteRole
+
+    siterole#tech:
+      type: tosca.nodes.SiteRole
+
+    tenantrole#admin:
+      type: tosca.nodes.TenantRole
+
+    tenantrole#access:
+      type: tosca.nodes.TenantRole
+
+    deploymentrole#admin:
+      type: tosca.nodes.DeploymentRole
+
+    slicerole#admin:
+      type: tosca.nodes.SliceRole
+
+    slicerole#access:
+      type: tosca.nodes.SliceRole
+
+# -----------------------------------------------------------------------------
+# Flavors
+# -----------------------------------------------------------------------------
+
+    m1.small:
+      type: tosca.nodes.Flavor
+
+    m1.medium:
+      type: tosca.nodes.Flavor
+
+    m1.large:
+      type: tosca.nodes.Flavor
+
+    m1.xlarge:
+      type: tosca.nodes.Flavor
+
+# -----------------------------------------------------------------------------
+# Dashboard Views
+# -----------------------------------------------------------------------------
+
+# Temporarily removed, waiting for a new Angular base implementation
+#    xsh:
+#      type: tosca.nodes.DashboardView
+#      properties:
+#          url: template:xsh
+
+    Customize:
+      type: tosca.nodes.DashboardView
+      properties:
+          url: template:xosDashboardManager
+          custom_icon: true
+
+    Diagnostic:
+      type: tosca.nodes.DashboardView
+      properties:
+          url: template:xosDiagnostic
+          custom_icon: true
+
+    Truckroll:
+      type: tosca.nodes.DashboardView
+      properties:
+          url: template:xosTruckroll
+          custom_icon: true
+
+    Monitoring:
+      type: tosca.nodes.DashboardView
+      properties:
+          url: template:xosCeilometerDashboard
+
+    Subscribers:
+      type: tosca.nodes.DashboardView
+      properties:
+          url: template:xosSubscribers
+
+    Tenant:
+      type: tosca.nodes.DashboardView
+      properties:
+          url: template:xosTenant
+
+    Developer:
+      type: tosca.nodes.DashboardView
+      properties:
+          url: template:xosDeveloper
+
+    Services Grid:
+      type: tosca.nodes.DashboardView
+      properties:
+          url: template:xosServiceGrid
+
+# -----------------------------------------------------------------------------
+# Network Templates
+# -----------------------------------------------------------------------------
+
+    Private:
+      type: tosca.nodes.NetworkTemplate
+      properties:
+          visibility: private
+          translation: none
+
+    Public shared IPv4:
+      type: tosca.nodes.NetworkTemplate
+      properties:
+          visibility: private
+          translation: NAT
+          shared_network_name: nat-net
+
+    Public dedicated IPv4:
+      type: tosca.nodes.NetworkTemplate
+      properties:
+          visibility: public
+          translation: none
+          shared_network_name: ext-net
+
diff --git a/roles/cord-profile/tasks/main.yml b/roles/cord-profile/tasks/main.yml
new file mode 100644
index 0000000..d0f9874
--- /dev/null
+++ b/roles/cord-profile/tasks/main.yml
@@ -0,0 +1,95 @@
+---
+# cord-profile/tasks/main.yml
+# Constructs a CORD service profile directory and configuration files
+
+- name: Create cord_profile directory
+  become: yes
+  file:
+    path: "{{ cord_profile_dir }}"
+    state: directory
+    mode: 0755
+    owner: "{{ ansible_user_id }}"
+    group: "{{ ansible_user_gid }}"
+
+- name: Create subdirectories inside cord_profile directory
+  file:
+    path: "{{ cord_profile_dir }}/{{ item }}"
+    state: directory
+    mode: 0755
+  with_items:
+    - key_import
+    - onboarding-docker-compose
+    - images
+
+- name: Copy ssh keys to key_import directory
+  copy:
+    remote_src: True # file is local to the remote machine
+    src: "{{ item.source_path | expanduser }}"
+    dest: "{{ cord_profile_dir }}/key_import/{{ item.name }}"
+    mode: 0600
+  with_items: "{{ xos_service_sshkeys }}"
+
+- name: Copy node_key to cord_profile directory
+  copy:
+    remote_src: True # file is local to the remote machine
+    src: "{{ ansible_user_dir }}/node_key"
+    dest: "{{ cord_profile_dir }}/node_key"
+    mode: 0600
+
+- name: Copy over core API key
+  copy:
+    src: "{{ playbook_dir }}/pki/intermediate_ca/private/xos-core.{{ site_suffix }}_key.pem"
+    dest: "{{ cord_profile_dir }}/core_api_key.pem"
+    mode: 0600
+
+- name: Copy over core API cert
+  copy:
+    src: "{{ playbook_dir }}/pki/intermediate_ca/certs/xos-core.{{ site_suffix }}_cert_chain.pem"
+    dest: "{{ cord_profile_dir }}/core_api_cert.pem"
+
+- name: Download Glance VM images
+  get_url:
+    url: "{{ item.url }}"
+    checksum: "{{ item.checksum }}"
+    dest: "{{ cord_profile_dir }}/images/{{ item.name }}.qcow2"
+  with_items: "{{ xos_images }}"
+
+- name: Copy over commonly used and utility TOSCA files
+  copy:
+    src: "{{ item }}"
+    dest: "{{ cord_profile_dir }}/{{ item }}"
+  with_items:
+    - fixtures.yaml
+    - enable-onboarding.yaml
+    - disable-onboarding.yaml
+
+- name: Create templated XOS configuration files
+  template:
+    src: "{{ item }}.j2"
+    dest: "{{ cord_profile_dir }}/{{ item }}"
+    mode: 0644
+  with_items:
+    - xos_common_config
+    - deployment.yaml
+    - xos.yaml
+    - xos-bootstrap-docker-compose.yaml
+
+- name: Create profile specific templated TOSCA config files
+  template:
+    src: "{{ item }}.j2"
+    dest: "{{ cord_profile_dir }}/{{ item }}"
+  with_items: "{{ xos_tosca_config_templates }}"
+
+- name: Create profile specific templated non-TOSCA files
+  template:
+    src: "{{ item }}.j2"
+    dest: "{{ cord_profile_dir }}/{{ item }}"
+  with_items: "{{ xos_other_templates }}"
+
+- name: Copy admin_openrc.sh
+  when: use_openstack
+  copy:
+    remote_src: True
+    src: "{{ ansible_user_dir }}/admin-openrc.sh"
+    dest: "{{ cord_profile_dir }}/admin-openrc.sh"
+
diff --git a/roles/cord-profile/templates/cdn-content.yaml.j2 b/roles/cord-profile/templates/cdn-content.yaml.j2
new file mode 100644
index 0000000..7b6ef00
--- /dev/null
+++ b/roles/cord-profile/templates/cdn-content.yaml.j2
@@ -0,0 +1,224 @@
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: Hypercache CDN Content
+
+imports:
+   - custom_types/xos.yaml
+   - custom_types/cdn.yaml
+
+topology_template:
+  node_templates:
+    HyperCache:
+      type: tosca.nodes.CDNService
+      properties:
+          # HyperCache service must already exist before running this recipe
+          no-create: true
+          no-delete: true
+          no-update: true
+
+    # Setup the CDN Service Provider
+
+    main_service_provider:
+        type: tosca.nodes.ServiceProvider
+        requirements:
+           - hpc_service:
+                 node: HyperCache
+                 relationship: tosca.relationships.MemberOfService
+
+    # Wall Street Journal Content Provider
+
+    wsj_content:
+        type: tosca.nodes.ContentProvider
+        requirements:
+            - service_provider:
+                  node: main_service_provider
+                  relationship: tosca.relationships.MemberOfServiceProvider
+
+    www.wsj.com:
+        type: tosca.nodes.CDNPrefix
+        requirements:
+             - content_provider:
+                   node: wsj_content
+                   relationship: tosca.relationships.MemberOfContentProvider
+             - default_origin_server:
+                   node: http_www.wsj.com
+                   relationship: tosca.relationships.DefaultOriginServer
+
+    si.wsj.net:
+        type: tosca.nodes.CDNPrefix
+        requirements:
+             - content_provider:
+                   node: wsj_content
+                   relationship: tosca.relationships.MemberOfContentProvider
+             - default_origin_server:
+                   node: http_si.wsj.net
+                   relationship: tosca.relationships.DefaultOriginServer
+
+    s.wsj.net:
+        type: tosca.nodes.CDNPrefix
+        requirements:
+             - content_provider:
+                   node: wsj_content
+                   relationship: tosca.relationships.MemberOfContentProvider
+             - default_origin_server:
+                   node: http_s.wsj.net
+                   relationship: tosca.relationships.DefaultOriginServer
+
+    ore.wsj.net:
+        type: tosca.nodes.CDNPrefix
+        requirements:
+             - content_provider:
+                   node: wsj_content
+                   relationship: tosca.relationships.MemberOfContentProvider
+             - default_origin_server:
+                   node: http_ore.wsj.net
+                   relationship: tosca.relationships.DefaultOriginServer
+
+    http_www.wsj.com:
+        type: tosca.nodes.OriginServer
+        requirements:
+             - content_provider:
+                   node: wsj_content
+                   relationship: tosca.relationships.MemberOfContentProvider
+
+    http_si.wsj.net:
+        type: tosca.nodes.OriginServer
+        requirements:
+             - content_provider:
+                   node: wsj_content
+                   relationship: tosca.relationships.MemberOfContentProvider
+
+    http_s.wsj.net:
+        type: tosca.nodes.OriginServer
+        requirements:
+             - content_provider:
+                   node: wsj_content
+                   relationship: tosca.relationships.MemberOfContentProvider
+
+    http_ore.wsj.net:
+        type: tosca.nodes.OriginServer
+        requirements:
+             - content_provider:
+                   node: wsj_content
+                   relationship: tosca.relationships.MemberOfContentProvider
+
+    # ON.Lab content provider
+
+    on_lab_content:
+        type: tosca.nodes.ContentProvider
+        requirements:
+            - service_provider:
+                  node: main_service_provider
+                  relationship: tosca.relationships.MemberOfServiceProvider
+
+    # Create CDN prefix onlab.vicci.org
+    onlab.vicci.org:
+        type: tosca.nodes.CDNPrefix
+        requirements:
+             - content_provider:
+                   node: on_lab_content
+                   relationship: tosca.relationships.MemberOfContentProvider
+
+    http_onos-videos.s3.amazonaws.com:
+        type: tosca.nodes.OriginServer
+        requirements:
+             - content_provider:
+                   node: on_lab_content
+                   relationship: tosca.relationships.MemberOfContentProvider
+
+    # Create origin server s3-us-west-1.amazonaws.com
+    http_s3-us-west-1.amazonaws.com:
+        type: tosca.nodes.OriginServer
+        requirements:
+             - content_provider:
+                   node: on_lab_content
+                   relationship: tosca.relationships.MemberOfContentProvider
+
+    # Create origin server s3.amazonaws.com
+    http_s3.amazonaws.com:
+        type: tosca.nodes.OriginServer
+        requirements:
+             - content_provider:
+                   node: on_lab_content
+                   relationship: tosca.relationships.MemberOfContentProvider
+
+    # Test Content Provider
+
+    testcp2:
+        type: tosca.nodes.ContentProvider
+        requirements:
+            - service_provider:
+                  node: main_service_provider
+                  relationship: tosca.relationships.MemberOfServiceProvider
+
+    http_www.cs.arizona.edu:
+        type: tosca.nodes.OriginServer
+        requirements:
+             - content_provider:
+                   node: testcp2
+                   relationship: tosca.relationships.MemberOfContentProvider
+
+    test-cdn.opencloud.us:
+        type: tosca.nodes.CDNPrefix
+        requirements:
+             - content_provider:
+                   node: testcp2
+                   relationship: tosca.relationships.MemberOfContentProvider
+
+             - default_origin_server:
+                   node: http_www.cs.arizona.edu
+                   relationship: tosca.relationships.DefaultOriginServer
+
+    # Health Checks
+
+    healthcheck_dns_onlab.vicci.org:
+        type: tosca.nodes.HpcHealthCheck
+        requirements:
+           - hpc_service:
+                 node: HyperCache
+                 relationship: tosca.relationships.MemberOfService
+        properties:
+           kind: dns
+           resource_name: onlab.vicci.org
+
+    healthcheck_dns_test-cdn.opencloud.us:
+        type: tosca.nodes.HpcHealthCheck
+        requirements:
+           - hpc_service:
+                 node: HyperCache
+                 relationship: tosca.relationships.MemberOfService
+        properties:
+           kind: dns
+           resource_name: test-cdn.opencloud.us
+
+    healthcheck_http_test-cdn-index:
+        type: tosca.nodes.HpcHealthCheck
+        requirements:
+           - hpc_service:
+                 node: HyperCache
+                 relationship: tosca.relationships.MemberOfService
+        properties:
+           kind: http
+           resource_name: test-cdn.opencloud.us:/
+           result_contains: Lowenthal
+
+    healthcheck_http_onlab_onos_image:
+        type: tosca.nodes.HpcHealthCheck
+        requirements:
+           - hpc_service:
+                 node: HyperCache
+                 relationship: tosca.relationships.MemberOfService
+        properties:
+           kind: http
+           resource_name: onlab.vicci.org:/onos/vm/onos-tutorial-1.1.0r220-ovf.zip
+
+    healthcheck_http_onlab_mininet_image:
+        type: tosca.nodes.HpcHealthCheck
+        requirements:
+           - hpc_service:
+                 node: HyperCache
+                 relationship: tosca.relationships.MemberOfService
+        properties:
+           kind: http
+           resource_name: onlab.vicci.org:/mininet-vm/mininet-2.1.0-130919-ubuntu-13.04-server-amd64-ovf.zip
+
diff --git a/roles/xos-install/templates/cord-services.yaml.j2 b/roles/cord-profile/templates/cord-services.yaml.j2
similarity index 95%
rename from roles/xos-install/templates/cord-services.yaml.j2
rename to roles/cord-profile/templates/cord-services.yaml.j2
index b182f04..0c3c2fd 100644
--- a/roles/xos-install/templates/cord-services.yaml.j2
+++ b/roles/cord-profile/templates/cord-services.yaml.j2
@@ -23,12 +23,14 @@
         no-delete: true
         no-update: true
 
+{% if use_management_hosts %}
     management_hosts:
       type: tosca.nodes.network.Network.XOS
       properties:
         no-create: true
         no-delete: true
         no-update: true
+{% endif %}
 
 # ONOS_CORD, fully created in vtn.yaml
     service#ONOS_CORD:
@@ -167,6 +169,11 @@
         - management:
             node: management
             relationship: tosca.relationships.ConnectsToNetwork
+{% if use_management_hosts %}
+        - management_hosts:
+            node: management_hosts
+            relationship: tosca.relationships.ConnectsToNetwork
+{% endif %}
         - image:
             node: image#vsg-1.1
             relationship: tosca.relationships.DefaultImage
diff --git a/roles/xos-install/templates/deployment.yaml.j2 b/roles/cord-profile/templates/deployment.yaml.j2
similarity index 77%
rename from roles/xos-install/templates/deployment.yaml.j2
rename to roles/cord-profile/templates/deployment.yaml.j2
index 2d5dfd1..0987b7c 100644
--- a/roles/xos-install/templates/deployment.yaml.j2
+++ b/roles/cord-profile/templates/deployment.yaml.j2
@@ -51,6 +51,23 @@
               relationship: tosca.relationships.SupportsDeployment
 
 # XOS Users
+# Default admin user account
+    {{ xos_admin_user }}:
+      type: tosca.nodes.User
+      properties:
+        password: {{ xos_admin_pass }}
+        firstname: {{ xos_admin_first }}
+        lastname: {{ xos_admin_last }}
+        is_admin: True
+      requirements:
+        - site:
+            node: {{ site_name }}
+            relationship: tosca.relationships.MemberOfSite
+        - tenant_dashboard:
+            node: Tenant
+            relationship: tosca.relationships.UsesDashboard
+
+# All other users
 {% for user in xos_users %}
     {{ user.email }}:
       type: tosca.nodes.User
diff --git a/roles/cord-profile/templates/fabric-network-cfg.json.j2 b/roles/cord-profile/templates/fabric-network-cfg.json.j2
new file mode 100644
index 0000000..8609ef8
--- /dev/null
+++ b/roles/cord-profile/templates/fabric-network-cfg.json.j2
@@ -0,0 +1,100 @@
+{
+    "ports" : {
+        "of:0000000000000001/1" : {
+            "interfaces" : [
+                {
+                    "ips" : [ "10.6.1.0/24" ]
+                }
+            ]
+        },
+        "of:0000000000000001/2" : {
+            "interfaces" : [
+                {
+                    "ips" : [ "10.6.1.0/24" ]
+                }
+            ]
+        },
+        "of:0000000000000002/3" : {
+            "interfaces" : [
+                {
+                    "ips" : [ "10.6.2.0/24" ]
+                }
+            ]
+        },
+        "of:0000000000000002/4" : {
+            "interfaces" : [
+                {
+                    "ips" : [ "10.6.2.0/24" ]
+                }
+            ]
+        }
+    },
+    "devices" : {
+        "of:0000000000000001" : {
+            "segmentrouting" : {
+                "name" : "Leaf-R1",
+                "nodeSid" : 101,
+                "routerIp" : "10.6.1.254",
+                "routerMac" : "00:00:00:00:01:80",
+                "isEdgeRouter" : true,
+                "adjacencySids" : []
+            }
+        },
+        "of:0000000000000002" : {
+            "segmentrouting" : {
+                "name" : "Leaf-R2",
+                "nodeSid" : 102,
+                "routerIp" : "10.6.2.254",
+                "routerMac" : "00:00:00:00:02:80",
+                "isEdgeRouter" : true,
+                "adjacencySids" : []
+            }
+        },
+        "of:0000000000000191" : {
+            "segmentrouting" : {
+                "name" : "Spine-R1",
+                "nodeSid" : 103,
+                "routerIp" : "192.168.0.11",
+                "routerMac" : "00:00:01:00:11:80",
+                "isEdgeRouter" : false,
+                "adjacencySids" : []
+            }
+        },
+        "of:0000000000000192" : {
+            "segmentrouting" : {
+                "name" : "Spine-R2",
+                "nodeSid" : 104,
+                "routerIp" : "192.168.0.22",
+                "routerMac" : "00:00:01:00:22:80",
+                "isEdgeRouter" : false,
+                "adjacencySids" : []
+            }
+        }
+    },
+    "hosts" : {
+        "00:00:00:00:00:01/-1" : {
+            "basic": {
+                "ips": ["10.6.1.1"],
+                "location": "of:0000000000000001/1"
+            }
+        },
+        "00:00:00:00:00:02/-1" : {
+            "basic": {
+                "ips": ["10.6.1.2"],
+                "location": "of:0000000000000001/2"
+            }
+        },
+        "00:00:00:00:00:03/-1" : {
+            "basic": {
+                "ips": ["10.6.2.1"],
+                "location": "of:0000000000000002/3"
+            }
+        },
+        "00:00:00:00:00:04/-1" : {
+            "basic": {
+                "ips": ["10.6.2.2"],
+                "location": "of:0000000000000002/4"
+            }
+        }
+    }
+}
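The fabric config above wires hosts to leaf ports and assigns segment-routing SIDs. A small sanity check one could run over the rendered JSON (the checker and its invariants are an illustration, not part of platform-install; the embedded sample is a trimmed copy of the template's data):

```python
import json

# Hypothetical sanity check for a rendered fabric-network-cfg.json:
# every host's "location" must reference a port defined under "ports",
# and segment-routing nodeSid values must be unique across devices.
def check_fabric_cfg(cfg):
    ports = set(cfg["ports"])
    for host, attrs in cfg["hosts"].items():
        if attrs["basic"]["location"] not in ports:
            raise ValueError("host %s attached to unknown port" % host)
    sids = [d["segmentrouting"]["nodeSid"] for d in cfg["devices"].values()]
    if len(sids) != len(set(sids)):
        raise ValueError("duplicate nodeSid")
    return True

# Trimmed sample mirroring the template's structure.
sample = json.loads("""
{"ports": {"of:0000000000000001/1": {"interfaces": [{"ips": ["10.6.1.0/24"]}]}},
 "devices": {"of:0000000000000001": {"segmentrouting": {"nodeSid": 101}}},
 "hosts": {"00:00:00:00:00:01/-1": {"basic": {"ips": ["10.6.1.1"],
                                              "location": "of:0000000000000001/1"}}}}
""")
```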
diff --git a/roles/xos-install/templates/fabric.yaml.j2 b/roles/cord-profile/templates/fabric-service.yaml.j2
similarity index 65%
rename from roles/xos-install/templates/fabric.yaml.j2
rename to roles/cord-profile/templates/fabric-service.yaml.j2
index 664505f..36ad25a 100644
--- a/roles/xos-install/templates/fabric.yaml.j2
+++ b/roles/cord-profile/templates/fabric-service.yaml.j2
@@ -3,8 +3,7 @@
 imports:
    - custom_types/xos.yaml
 
-description: fabric configuration generated by platform-install
-
+description: fabric services, generated by platform-install
 
 topology_template:
   node_templates:
@@ -20,7 +19,7 @@
           replaces: service_ONOS_Fabric
           rest_onos/v1/network/configuration/: { get_artifact: [ SELF, fabric_network_cfg_json, LOCAL_FILE ] }
       artifacts:
-          fabric_network_cfg_json: /root/setup/network-cfg-quickstart.json
+          fabric_network_cfg_json: {{ fabric_network_cfg_json }}
 
     service#fabric:
       type: tosca.nodes.FabricService
@@ -40,23 +39,3 @@
       properties:
           dependencies: org.onosproject.drivers, org.onosproject.openflow-base, org.onosproject.netcfghostprovider, org.onosproject.netcfglinksprovider, org.onosproject.segmentrouting, org.onosproject.vrouter, org.onosproject.hostprovider
 
-{% for node in groups["compute"] %}
-    {{ node }}:
-      type: tosca.nodes.Node
-
-    # Fabric location field for node $NODE
-    {{ node }}_location_tag:
-      type: tosca.nodes.Tag
-      properties:
-          name: location
-          value: of:0000000000000001/1
-      requirements:
-          - target:
-              node: {{ node }}
-              relationship: tosca.relationships.TagsObject
-          - service:
-              node: service#ONOS_Fabric
-              relationship: tosca.relationships.MemberOfService
-
-{% endfor %}
-
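The deleted Jinja loop emitted a `tosca.nodes.Node` plus a location `Tag` per member of the Ansible `compute` group. A Python sketch of what that loop used to expand to, for reference (node names and the fixed port value are illustrative):

```python
# Approximation of the removed "{% for node in groups['compute'] %}"
# block: one Node and one location Tag per compute host.
def location_tag(node, port="of:0000000000000001/1"):
    return (
        "    %s:\n"
        "      type: tosca.nodes.Node\n\n"
        "    %s_location_tag:\n"
        "      type: tosca.nodes.Tag\n"
        "      properties:\n"
        "          name: location\n"
        "          value: %s\n" % (node, node, port)
    )

snippet = "".join(location_tag(n) for n in ["node1", "node2"])
```

Per the head of this change, this per-node config moved out of the fabric template into the refactored compute-node configuration.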
diff --git a/roles/xos-install/templates/management-net.yaml.j2 b/roles/cord-profile/templates/management-net.yaml.j2
similarity index 65%
rename from roles/xos-install/templates/management-net.yaml.j2
rename to roles/cord-profile/templates/management-net.yaml.j2
index 79ea589..781dbf3 100644
--- a/roles/xos-install/templates/management-net.yaml.j2
+++ b/roles/cord-profile/templates/management-net.yaml.j2
@@ -20,13 +20,6 @@
         translation: none
         vtn_kind: MANAGEMENT_LOCAL
 
-    management_hosts_template:
-      type: tosca.nodes.NetworkTemplate
-      properties:
-          visibility: private
-          translation: none
-          vtn_kind: MANAGEMENT_HOST
-
     management:
       type: tosca.nodes.network.Network
       properties:
@@ -40,6 +33,30 @@
             node: {{ site_name }}_management
             relationship: tosca.relationships.MemberOfSlice
 
+{% if use_management_hosts %}
+    management_hosts_template:
+      type: tosca.nodes.NetworkTemplate
+      properties:
+        visibility: private
+        translation: none
+        vtn_kind: MANAGEMENT_HOST
+
+    management_hosts:
+      type: tosca.nodes.network.Network
+      properties:
+        ip_version: 4
+        cidr: {{ management_hosts_net_cidr }}
+        start_ip: {{ management_hosts_net_range_xos_low }}
+        end_ip: {{ management_hosts_net_range_xos_high }}
+      requirements:
+        - network_template:
+            node: management_hosts_template
+            relationship: tosca.relationships.UsesNetworkTemplate
+        - owner:
+            node: {{ site_name }}_management
+            relationship: tosca.relationships.MemberOfSlice
+{% endif %}
+
     {{ site_name }}_management:
       description: This slice exists solely to own the management network
       type: tosca.nodes.Slice
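The `management_hosts` network is only rendered when `use_management_hosts` is set, and its XOS allocation range should sit inside the configured CIDR. A stdlib check of that invariant (the variable values in the test are examples, not taken from any real inventory):

```python
import ipaddress

# Verify that management_hosts_net_range_xos_low/high fall inside
# management_hosts_net_cidr, and that low <= high.
def range_in_cidr(cidr, low, high):
    net = ipaddress.ip_network(cidr)
    lo, hi = ipaddress.ip_address(low), ipaddress.ip_address(high)
    return lo in net and hi in net and lo <= hi
```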
diff --git a/roles/cord-profile/templates/mock-mcord.yaml.j2 b/roles/cord-profile/templates/mock-mcord.yaml.j2
new file mode 100644
index 0000000..060f829
--- /dev/null
+++ b/roles/cord-profile/templates/mock-mcord.yaml.j2
@@ -0,0 +1,319 @@
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: Set up CORD-related services
+
+imports:
+   - custom_types/xos.yaml
+
+topology_template:
+  node_templates:
+    # M-CORD Services
+
+    # RAN
+    vBBU:
+      type: tosca.nodes.Service
+      properties:
+          view_url: /mcord/?service=vBBU
+          kind: RAN
+
+    eSON:
+      type: tosca.nodes.Service
+      properties:
+          view_url: http://www.google.com
+          kind: RAN
+
+    # EPC
+    vMME:
+      type: tosca.nodes.Service
+      properties:
+          view_url: /mcord/?service=vMME
+          kind: EPC
+
+    vSGW:
+      type: tosca.nodes.Service
+      properties:
+          view_url: /mcord/?service=vSGW
+          kind: EPC
+
+    vPGW:
+      type: tosca.nodes.Service
+      properties:
+          view_url: /mcord/?service=vPGW
+          kind: EPC
+
+    # EDGE
+    Cache:
+      type: tosca.nodes.Service
+      properties:
+          view_url: /mcord/?service=Cache
+          icon_url: /static/mCordServices/service_cache.png
+          kind: EDGE
+
+    Firewall:
+      type: tosca.nodes.Service
+      properties:
+          view_url: /mcord/?service=Firewall
+          icon_url: /static/mCordServices/service_firewall.png
+          kind: EDGE
+
+    Video Optimization:
+      type: tosca.nodes.Service
+      properties:
+          view_url: /mcord/?service=Video%20Optimization
+          icon_url: /static/mCordServices/service_video.png
+          kind: EDGE
+
+    # Images
+    trusty-server-multi-nic:
+      type: tosca.nodes.Image
+      properties:
+         disk_format: QCOW2
+         container_format: BARE
+
+    # Deployments
+    StanfordDeployment:
+      type: tosca.nodes.Deployment
+      properties:
+          flavors: m1.large, m1.medium, m1.small
+      requirements:
+          - image:
+              node: trusty-server-multi-nic
+              relationship: tosca.relationships.SupportsImage
+
+    # Site
+    stanford:
+      type: tosca.nodes.Site
+      properties:
+          display_name: Stanford University
+          site_url: https://www.stanford.edu/
+      requirements:
+          - deployment:
+               node: StanfordDeployment
+               relationship: tosca.relationships.MemberOfDeployment
+          - controller:
+               node: CloudLab
+               relationship: tosca.relationships.UsesController
+
+
+    # Nodes
+    node1.stanford.edu:
+      type: tosca.nodes.Node
+      requirements:
+        - site:
+            node: stanford
+            relationship: tosca.relationships.MemberOfSite
+        - deployment:
+            node: StanfordDeployment
+            relationship: tosca.relationships.MemberOfDeployment
+
+    # Slices
+    stanford_slice:
+      description: Slice that contains sample instances
+      type: tosca.nodes.Slice
+      requirements:
+          - site:
+              node: stanford
+              relationship: tosca.relationships.MemberOfSite
+
+    # Instances
+    BBU_service_instance1:
+      type: tosca.nodes.Compute
+      capabilities:
+        # Host container properties
+        host:
+         properties:
+           num_cpus: 1
+           disk_size: 10 GB
+           mem_size: 4 MB
+        # Guest Operating System properties
+        os:
+          properties:
+            # host Operating System image properties
+            architecture: x86_64
+            type: linux
+            distribution: ubuntu
+            version: 14.04
+      requirements:
+          - slice:
+                node: stanford_slice
+                relationship: tosca.relationships.MemberOfSlice
+
+    BBU_service_instance2:
+      type: tosca.nodes.Compute
+      capabilities:
+        # Host container properties
+        host:
+         properties:
+           num_cpus: 1
+           disk_size: 10 GB
+           mem_size: 4 MB
+        # Guest Operating System properties
+        os:
+          properties:
+            # host Operating System image properties
+            architecture: x86_64
+            type: linux
+            distribution: ubuntu
+            version: 14.04
+      requirements:
+          - slice:
+                node: stanford_slice
+                relationship: tosca.relationships.MemberOfSlice
+
+    MME_service_instance1:
+      type: tosca.nodes.Compute
+      capabilities:
+        # Host container properties
+        host:
+         properties:
+           num_cpus: 1
+           disk_size: 10 GB
+           mem_size: 4 MB
+        # Guest Operating System properties
+        os:
+          properties:
+            # host Operating System image properties
+            architecture: x86_64
+            type: linux
+            distribution: ubuntu
+            version: 14.04
+      requirements:
+          - slice:
+                node: stanford_slice
+                relationship: tosca.relationships.MemberOfSlice
+
+    SGW_service_instance1:
+      type: tosca.nodes.Compute
+      capabilities:
+        # Host container properties
+        host:
+         properties:
+           num_cpus: 1
+           disk_size: 10 GB
+           mem_size: 4 MB
+        # Guest Operating System properties
+        os:
+          properties:
+            # host Operating System image properties
+            architecture: x86_64
+            type: linux
+            distribution: ubuntu
+            version: 14.04
+      requirements:
+          - slice:
+                node: stanford_slice
+                relationship: tosca.relationships.MemberOfSlice
+
+    PGW_service_instance1:
+      type: tosca.nodes.Compute
+      capabilities:
+        # Host container properties
+        host:
+         properties:
+           num_cpus: 1
+           disk_size: 10 GB
+           mem_size: 4 MB
+        # Guest Operating System properties
+        os:
+          properties:
+            # host Operating System image properties
+            architecture: x86_64
+            type: linux
+            distribution: ubuntu
+            version: 14.04
+      requirements:
+          - slice:
+                node: stanford_slice
+                relationship: tosca.relationships.MemberOfSlice
+
+    # Let's add a user who can be administrator of the household
+    johndoe@stanford.us:
+      type: tosca.nodes.User
+      properties:
+          password: letmein
+          firstname: john
+          lastname: doe
+      requirements:
+          - site:
+              node: stanford
+              relationship: tosca.relationships.MemberOfSite
+
+    # A subscriber
+    Stanford:
+       type: tosca.nodes.CORDSubscriber
+       properties:
+           service_specific_id: 123
+           firewall_enable: false
+           cdn_enable: false
+           url_filter_enable: false
+           url_filter_level: R
+       requirements:
+          - house_admin:
+              node: johndoe@stanford.us
+              relationship: tosca.relationships.AdminPrivilege
+
+    Barbera Lapinski:
+       type: tosca.nodes.CORDUser
+       properties:
+           mac: 01:02:03:04:05:06
+           level: PG_13
+       requirements:
+           - household:
+               node: Stanford
+               relationship: tosca.relationships.SubscriberDevice
+
+    Norbert Shumway:
+       type: tosca.nodes.CORDUser
+       properties:
+           mac: 90:E2:BA:82:F9:75
+           level: PG_13
+       requirements:
+           - household:
+               node: Stanford
+               relationship: tosca.relationships.SubscriberDevice
+
+    Fay Muldoon:
+       type: tosca.nodes.CORDUser
+       properties:
+           mac: 68:5B:35:9D:91:D5
+           level: PG_13
+       requirements:
+           - household:
+               node: Stanford
+               relationship: tosca.relationships.SubscriberDevice
+
+    Janene Earnest:
+       type: tosca.nodes.CORDUser
+       properties:
+           mac: 34:36:3B:C9:B6:A6
+           level: PG_13
+       requirements:
+           - household:
+               node: Stanford
+               relationship: tosca.relationships.SubscriberDevice
+
+
+    Topology:
+      type: tosca.nodes.DashboardView
+      properties:
+          url: template:xosMcordTopology
+
+    Ceilometer:
+      type: tosca.nodes.DashboardView
+      properties:
+          url: template:xosCeilometerDashboard
+
+    padmin@vicci.org:
+      type: tosca.nodes.User
+      properties:
+          firstname: XOS
+          lastname: admin
+          is_admin: true
+      requirements:
+          - mcord_dashboard:
+              node: Topology
+              relationship: tosca.relationships.UsesDashboard
+          - ceilometer_dashboard:
+              node: Ceilometer
+              relationship: tosca.relationships.UsesDashboard
+
diff --git a/roles/cord-profile/templates/mock-onos.yaml.j2 b/roles/cord-profile/templates/mock-onos.yaml.j2
new file mode 100644
index 0000000..1f733f9
--- /dev/null
+++ b/roles/cord-profile/templates/mock-onos.yaml.j2
@@ -0,0 +1,16 @@
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+imports:
+   - custom_types/xos.yaml
+
+description: enough ONOS config to make the mock work
+
+topology_template:
+  node_templates:
+
+    service#ONOS_Fabric:
+      type: tosca.nodes.ONOSService
+
+    service#ONOS_CORD:
+      type: tosca.nodes.ONOSService
+
diff --git a/roles/xos-install/templates/openstack.yaml.j2 b/roles/cord-profile/templates/openstack.yaml.j2
similarity index 97%
rename from roles/xos-install/templates/openstack.yaml.j2
rename to roles/cord-profile/templates/openstack.yaml.j2
index 65d2338..07882f1 100644
--- a/roles/xos-install/templates/openstack.yaml.j2
+++ b/roles/cord-profile/templates/openstack.yaml.j2
@@ -51,7 +51,7 @@
           admin_tenant: { get_script_env: [ SELF, adminrc, OS_TENANT_NAME, LOCAL_FILE] }
           domain: Default
       artifacts:
-          adminrc: /root/setup/admin-openrc.sh
+          adminrc: /opt/cord_profile/admin-openrc.sh
 
 # Site - adds openstack controller to site defined in deployment.yaml
     {{ site_name }}:
diff --git a/roles/xos-install/templates/public-net.yaml.j2 b/roles/cord-profile/templates/public-net.yaml.j2
similarity index 99%
rename from roles/xos-install/templates/public-net.yaml.j2
rename to roles/cord-profile/templates/public-net.yaml.j2
index 26573ed..cf111ca 100644
--- a/roles/xos-install/templates/public-net.yaml.j2
+++ b/roles/cord-profile/templates/public-net.yaml.j2
@@ -12,11 +12,13 @@
     {{ site_name }}:
       type: tosca.nodes.Site
 
+
 # vrouter service, fully created in cord-service.yaml
     service#vrouter:
       type: tosca.nodes.VRouterService
 
 # public network
+
     public_template:
       type: tosca.nodes.NetworkTemplate
       properties:
diff --git a/roles/cord-profile/templates/sample.yaml.j2 b/roles/cord-profile/templates/sample.yaml.j2
new file mode 100644
index 0000000..6a3324c
--- /dev/null
+++ b/roles/cord-profile/templates/sample.yaml.j2
@@ -0,0 +1,92 @@
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: >
+    Some sample data to populate the demo frontend
+
+imports:
+   - custom_types/xos.yaml
+
+topology_template:
+  node_templates:
+    trusty-server-multi-nic:
+      type: tosca.nodes.Image
+      properties:
+         disk_format: QCOW2
+         container_format: BARE
+
+    {{ deployment_type }}:
+      type: tosca.nodes.Deployment
+      properties:
+          flavors: m1.large, m1.medium, m1.small
+      requirements:
+          - image:
+              node: trusty-server-multi-nic
+              relationship: tosca.relationships.SupportsImage
+
+    CloudLab:
+      type: tosca.nodes.Controller
+      requirements:
+          - deployment:
+              node: {{ deployment_type }}
+              relationship: tosca.relationships.ControllerDeployment
+      properties:
+          backend_type: OpenStack
+          version: Juno
+          auth_url: http://sample/v2
+          admin_user: admin
+          admin_password: adminpassword
+          admin_tenant: admin
+          domain: Default
+
+    {{ site_name }}:
+      type: tosca.nodes.Site
+      properties:
+          display_name: {{ site_name }}
+          site_url: http://opencloud.us/
+      requirements:
+          - deployment:
+               node: {{ deployment_type }}
+               relationship: tosca.relationships.MemberOfDeployment
+          - controller:
+               node: CloudLab
+               relationship: tosca.relationships.UsesController
+
+    Public shared IPv4:
+      type: tosca.nodes.NetworkTemplate
+      properties:
+          visibility: private
+          translation: NAT
+
+    {{ xos_admin_user }}:
+      type: tosca.nodes.User
+      properties:
+        password: {{ xos_admin_pass }}
+        firstname: {{ xos_admin_first }}
+        lastname: {{ xos_admin_last }}
+        is_admin: True
+        is_active: True
+      requirements:
+        - site:
+            node: {{ site_name }}
+            relationship: tosca.relationships.MemberOfSite
+
+    node1.opencloud.us:
+      type: tosca.nodes.Node
+      requirements:
+        - site:
+            node: {{ site_name }}
+            relationship: tosca.relationships.MemberOfSite
+        - deployment:
+            node: {{ deployment_type }}
+            relationship: tosca.relationships.MemberOfDeployment
+
+    node2.opencloud.us:
+      type: tosca.nodes.Node
+      requirements:
+        - site:
+            node: {{ site_name }}
+            relationship: tosca.relationships.MemberOfSite
+        - deployment:
+            node: {{ deployment_type }}
+            relationship: tosca.relationships.MemberOfDeployment
+
diff --git a/roles/cord-profile/templates/services.yaml.j2 b/roles/cord-profile/templates/services.yaml.j2
new file mode 100644
index 0000000..055fa57
--- /dev/null
+++ b/roles/cord-profile/templates/services.yaml.j2
@@ -0,0 +1,67 @@
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: Created by platform-install/roles/cord-profile/templates/services.yaml.j2
+
+imports:
+   - custom_types/xos.yaml
+
+topology_template:
+  node_templates:
+
+    # CORD Services
+    service#vtr:
+      type: tosca.nodes.Service
+      properties:
+          view_url: /admin/vtr/vtrservice/$id$/
+          kind: vTR
+          replaces: service_vtr
+
+    service#volt:
+      type: tosca.nodes.VOLTService
+      requirements:
+          - vsg_tenant:
+              node: service#vsg
+              relationship: tosca.relationships.TenantOfService
+      properties:
+          view_url: /admin/cord/voltservice/$id$/
+          kind: vOLT
+          replaces: service_volt
+
+    addresses_vsg:
+      type: tosca.nodes.AddressPool
+      properties:
+          addresses: 10.168.0.0/24
+          gateway_ip: 10.168.0.1
+          gateway_mac: 02:42:0a:a8:00:01
+
+    addresses_exampleservice-public:
+      type: tosca.nodes.AddressPool
+      properties:
+          addresses: 10.168.1.0/24
+          gateway_ip: 10.168.1.1
+          gateway_mac: 02:42:0a:a8:00:01
+
+    service#vsg:
+      type: tosca.nodes.VSGService
+      requirements:
+          - vrouter_tenant:
+              node: service#vrouter
+              relationship: tosca.relationships.TenantOfService
+      properties:
+          view_url: /admin/cord/vsgservice/$id$/
+          private_key_fn: /opt/xos/synchronizers/vcpe/vcpe_private_key
+          replaces: service_vsg
+
+    service#vrouter:
+      type: tosca.nodes.VRouterService
+      properties:
+          view_url: /admin/vrouter/vrouterservice/$id$/
+          replaces: service_vrouter
+      requirements:
+          - addresses_vsg:
+              node: addresses_vsg
+              relationship: tosca.relationships.ProvidesAddresses
+          - addresses_service1:
+              node: addresses_exampleservice-public
+              relationship: tosca.relationships.ProvidesAddresses
+
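`service#vrouter` above owns two `AddressPool`s (`addresses_vsg` and `addresses_exampleservice-public`). Pools provided to one vrouter should not overlap; a quick stdlib check (the pool CIDRs mirror the template's defaults, the checker itself is an assumption):

```python
import ipaddress

# Pool CIDRs copied from the services.yaml.j2 defaults above.
pools = {
    "addresses_vsg": "10.168.0.0/24",
    "addresses_exampleservice-public": "10.168.1.0/24",
}

# True when no two pools share any addresses.
def pools_disjoint(pools):
    nets = [ipaddress.ip_network(p) for p in pools.values()]
    return all(not a.overlaps(b)
               for i, a in enumerate(nets) for b in nets[i + 1:])
```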
diff --git a/roles/cord-profile/templates/volt-devices.yaml.j2 b/roles/cord-profile/templates/volt-devices.yaml.j2
new file mode 100644
index 0000000..ec882c9
--- /dev/null
+++ b/roles/cord-profile/templates/volt-devices.yaml.j2
@@ -0,0 +1,46 @@
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: created by platform-install/roles/cord-profile/templates/volt-devices.yaml.j2
+
+imports:
+   - custom_types/xos.yaml
+
+topology_template:
+  node_templates:
+
+# vOLT service defined in services.yaml
+    service#volt:
+      type: tosca.nodes.VOLTService
+      properties:
+          no-create: True
+          no-delete: True
+          no-update: True
+
+# vOLT devices
+{% for device in volt_devices %}
+    {{ device.name }}-{{ device.index | default(loop.index) }}:
+      type: tosca.nodes.VOLTDevice
+      properties:
+            driver: {{ device.driver | default("pmc-olt") }}
+            openflow_id: {{ device.openflow_id }}
+            access_devices: {{ device.access_devices }}
+      requirements:
+          - volt_service:
+              node: service#volt
+              relationship: tosca.relationships.MemberOfService
+          - access_agent:
+              node: {{ device.name }}-agent-{{ device.index | default(loop.index) }}
+              relationship: tosca.relationships.UsesAgent
+
+    {{ device.name }}-agent-{{ device.index | default(loop.index) }}:
+      type: tosca.nodes.AccessAgent
+      properties:
+          mac: {{ device.agent_mac }}
+          port_mappings: {{ device.agent_port_mappings }}
+      requirements:
+          - volt_service:
+              node: service#volt
+              relationship: tosca.relationships.MemberOfService
+
+{% endfor %}
+
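The vOLT device loop leans on Jinja's `default(...)` filter twice: the node-name suffix falls back to `loop.index` when `device.index` is unset, and the driver falls back to `"pmc-olt"`. The same resolution in Python, with a made-up device entry:

```python
# Mirror of the template's defaulting: index defaults to the loop
# index, driver defaults to "pmc-olt".
def volt_node_names(device, loop_index):
    idx = device.get("index", loop_index)
    return {
        "device": "%s-%s" % (device["name"], idx),
        "agent": "%s-agent-%s" % (device["name"], idx),
        "driver": device.get("driver", "pmc-olt"),
    }
```

This matters because the `access_agent` requirement on each device node must name the agent generated in the same loop iteration; both use the identical `name`/`index` pair, so the references always line up.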
diff --git a/roles/cord-profile/templates/vrouter.yaml.j2 b/roles/cord-profile/templates/vrouter.yaml.j2
new file mode 100644
index 0000000..cf53513
--- /dev/null
+++ b/roles/cord-profile/templates/vrouter.yaml.j2
@@ -0,0 +1,174 @@
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: Just enough TOSCA to configure the vRouter service on the CORD POD
+
+imports:
+   - custom_types/xos.yaml
+   - custom_types/vrouter.yaml
+
+topology_template:
+  node_templates:
+
+    service#vrouter:
+      type: tosca.nodes.VRouterService
+      properties:
+          view_url: /admin/vrouter/
+          no-delete: true
+          no-create: true
+          rest_hostname: onos-fabric
+          rest_port: 8181
+          rest_user: onos
+          rest_pass: rocks
+
+    device#of:00000000000000b1:
+      type: tosca.nodes.VRouterDevice
+      properties:
+        openflow_id: of:00000000000000b1
+        driver: softrouter
+        # config_key: basic
+      requirements:
+        - service#vrouter:
+            node: service#vrouter
+            relationship: tosca.relationships.MemberOfService
+
+    # Port 1
+    port#port1/1:
+      type: tosca.nodes.VRouterPort
+      properties:
+        openflow_id: of:00000000000000b1/1
+      requirements:
+        - device#of:00000000000000b1:
+            node: device#of:00000000000000b1
+            relationship: tosca.relationships.PortOfDevice
+        - service#vrouter:
+            node: service#vrouter
+            relationship: tosca.relationships.MemberOfService
+
+    interface#b1-1:
+      type: tosca.nodes.VRouterInterface
+      properties:
+        name: b1-1
+        mac: 00:00:00:00:00:01
+      requirements:
+        - port#port1/1:
+            node: port#port1/1
+            relationship: tosca.relationships.InterfaceOfPort
+
+    ips#10.0.1.2/24:
+      type: tosca.nodes.VRouterIp
+      properties:
+        ip: 10.0.1.2/24
+      requirements:
+        - interface#b1-1:
+            node: interface#b1-1
+            relationship: tosca.relationships.IpOfInterface
+
+    # Port 2
+    port#port1/2:
+      type: tosca.nodes.VRouterPort
+      properties:
+        openflow_id: of:00000000000000b1/2
+      requirements:
+        - device#of:00000000000000b1:
+            node: device#of:00000000000000b1
+            relationship: tosca.relationships.PortOfDevice
+        - service#vrouter:
+            node: service#vrouter
+            relationship: tosca.relationships.MemberOfService
+
+    interface#b1-2:
+      type: tosca.nodes.VRouterInterface
+      properties:
+        name: b1-2
+        mac: 00:00:00:00:00:01
+      requirements:
+        - port#port1/2:
+            node: port#port1/2
+            relationship: tosca.relationships.InterfaceOfPort
+
+    ips#10.0.2.2/24:
+      type: tosca.nodes.VRouterIp
+      properties:
+        ip: 10.0.2.2/24
+      requirements:
+        - interface#b1-2:
+            node: interface#b1-2
+            relationship: tosca.relationships.IpOfInterface
+
+    # Port 3
+    port#port1/3:
+      type: tosca.nodes.VRouterPort
+      properties:
+        openflow_id: of:00000000000000b1/3
+      requirements:
+        - device#of:00000000000000b1:
+            node: device#of:00000000000000b1
+            relationship: tosca.relationships.PortOfDevice
+        - service#vrouter:
+            node: service#vrouter
+            relationship: tosca.relationships.MemberOfService
+
+    interface#b1-3:
+      type: tosca.nodes.VRouterInterface
+      properties:
+        name: b1-3
+        mac: 00:00:00:00:00:01
+      requirements:
+        - port#port1/3:
+            node: port#port1/3
+            relationship: tosca.relationships.InterfaceOfPort
+
+    ips#10.0.3.2/24:
+      type: tosca.nodes.VRouterIp
+      properties:
+        ip: 10.0.3.2/24
+      requirements:
+        - interface#b1-3:
+            node: interface#b1-3
+            relationship: tosca.relationships.IpOfInterface
+
+    # Port 4
+    port#port1/4:
+      type: tosca.nodes.VRouterPort
+      properties:
+        openflow_id: of:00000000000000b1/4
+      requirements:
+        - device#of:00000000000000b1:
+            node: device#of:00000000000000b1
+            relationship: tosca.relationships.PortOfDevice
+        - service#vrouter:
+            node: service#vrouter
+            relationship: tosca.relationships.MemberOfService
+
+    interface#b1-4:
+      type: tosca.nodes.VRouterInterface
+      properties:
+        name: b1-4
+        mac: 00:00:00:00:00:01
+        vlan: 100
+      requirements:
+        - port#port1/4:
+            node: port#port1/4
+            relationship: tosca.relationships.InterfaceOfPort
+
+    ips#10.0.4.2/24:
+      type: tosca.nodes.VRouterIp
+      properties:
+        ip: 10.0.4.2/24
+      requirements:
+        - interface#b1-4:
+            node: interface#b1-4
+            relationship: tosca.relationships.IpOfInterface
+
+    app#vrouterApp:
+      type: tosca.nodes.VRouterApp
+      properties:
+        name: org.onosproject.router
+        # can we use a relation to specify the connect point port?
+        control_plane_connect_point: of:00000000000000b1/5
+        ospf_enabled: true
+      requirements:
+          - service#vrouter:
+              node: service#vrouter
+              relationship: tosca.relationships.MemberOfService
+
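Each `VRouterPort` in this template is wired to exactly one interface, and each interface to one IP, via `requirements` whose `node:` fields must name declared nodes. A minimal checker for that chain (the topology dict models the structure of the nodes above; it is an illustration, not how XOS validates TOSCA):

```python
# Assumed model: interfaces maps each VRouterInterface node name to
# the VRouterPort node it requires via InterfaceOfPort.
topology = {
    "ports": ["port#port1/1", "port#port1/2"],
    "interfaces": {"interface#b1-1": "port#port1/1",
                   "interface#b1-2": "port#port1/2"},
}

# True when every interface's port requirement resolves to a
# declared port node.
def interfaces_resolve(topo):
    return all(port in topo["ports"] for port in topo["interfaces"].values())
```

A check like this catches the copy-paste hazard this template invites: the requirement label and its `node:` target drifting apart as blocks are duplicated per port.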
diff --git a/roles/cord-profile/templates/vtn-service.yaml.j2 b/roles/cord-profile/templates/vtn-service.yaml.j2
new file mode 100644
index 0000000..7dc5d30
--- /dev/null
+++ b/roles/cord-profile/templates/vtn-service.yaml.j2
@@ -0,0 +1,52 @@
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+imports:
+   - custom_types/xos.yaml
+
+description: Configures the VTN ONOS service
+
+topology_template:
+  node_templates:
+
+    service#ONOS_CORD:
+      type: tosca.nodes.ONOSService
+      requirements:
+      properties:
+          kind: onos
+          view_url: /admin/onos/onosservice/$id$/
+          no_container: true
+          rest_hostname: onos-cord
+          rest_port: 8182
+          replaces: service_ONOS_CORD
+
+    service#vtn:
+      type: tosca.nodes.VTNService
+      properties:
+          view_url: /admin/vtn/vtnservice/$id$/
+          privateGatewayMac: 00:00:00:00:00:01
+          localManagementIp: {{ management_network_ip }}
+          ovsdbPort: 6641
+          sshUser: root
+          sshKeyFile: /root/node_key
+          sshPort: 22
+          xosEndpoint: http://xos:{{ xos_ui_port }}/
+          xosUser: {{ xos_admin_user }}
+          xosPassword: {{ xos_admin_pass }}
+          replaces: service_vtn
+          vtnAPIVersion: 2
+          controllerPort: onos-cord:6654
+
+    VTN_ONOS_app:
+      type: tosca.nodes.ONOSVTNApp
+      requirements:
+          - onos_tenant:
+              node: service#ONOS_CORD
+              relationship: tosca.relationships.TenantOfService
+          - vtn_service:
+              node: service#vtn
+              relationship: tosca.relationships.UsedByService
+      properties:
+          install_dependencies: http://mavenrepo:8080/repository/org/opencord/cord-config/{{ cord_app_version }}/cord-config-{{ cord_app_version }}.oar,http://mavenrepo:8080/repository/org/opencord/vtn/{{ cord_app_version }}/vtn-{{ cord_app_version }}.oar
+          dependencies: org.onosproject.drivers, org.onosproject.drivers.ovsdb, org.onosproject.openflow-base, org.onosproject.ovsdb-base, org.onosproject.dhcp
+          autogenerate: vtn-network-cfg
+
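The `install_dependencies` list is assembled from `cord_app_version` following the maven repo layout `org/opencord/<app>/<version>/<app>-<version>.oar`. A sketch of that URL scheme (the host and app names come from the template; the version string is an example):

```python
# Build an OAR URL the way the vtn-service template interpolates
# cord_app_version into the mavenrepo path.
def oar_url(app, version, repo="http://mavenrepo:8080/repository"):
    return "%s/org/opencord/%s/%s/%s-%s.oar" % (repo, app, version, app, version)

urls = ",".join(oar_url(a, "1.0-SNAPSHOT") for a in ["cord-config", "vtn"])
```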
diff --git a/roles/cord-profile/templates/xos-bootstrap-docker-compose.yaml.j2 b/roles/cord-profile/templates/xos-bootstrap-docker-compose.yaml.j2
new file mode 100644
index 0000000..bf79d65
--- /dev/null
+++ b/roles/cord-profile/templates/xos-bootstrap-docker-compose.yaml.j2
@@ -0,0 +1,98 @@
+version: '2'
+
+# XOS bootstrap docker compose
+# generated by platform-install/roles/cord-profile
+
+networks:
+{% for network in xos_docker_networks %}
+  {{ network }}:
+    external: true
+{% endfor %}
+
+services:
+  xos_db:
+    image: xosproject/xos-postgres
+    networks:
+{% for network in xos_docker_networks %}
+      - {{ network }}
+{% endfor %}
+    expose:
+      - "5432"
+
+{% if use_redis %}
+  xos_redis:
+    image: redis
+    networks:
+{% for network in xos_docker_networks %}
+     - {{ network }}
+{% endfor %}
+    logging:
+      driver: "json-file"
+      options:
+        max-size: "1000k"
+        max-file: "5"
+{% endif %}
+
+  xos_bootstrap_ui:
+    image: xosproject/xos
+    command: python /opt/xos/manage.py runserver 0.0.0.0:{{ xos_bootstrap_ui_port }} --insecure --makemigrations
+    networks:
+{% for network in xos_docker_networks %}
+     - {{ network }}
+{% endfor %}
+    labels:
+      org.xosproject.kind: userinterface
+      org.xosproject.target: bootstrap
+    links:
+      - xos_db
+{% if use_redis %}
+      - xos_redis:redis
+{% endif %}
+    volumes:
+      - .:/opt/cord_profile:ro
+      - ./xos_common_config:/opt/xos/xos_configuration/xos_common_config:ro
+{% for service in xos_services %}
+      - {{ cord_dir }}/{{ service.path }}:/opt/xos_services/{{ service.path | basename }}:ro
+{% endfor %}
+{% for library in xos_libraries %}
+      - {{ cord_dir }}/orchestration/xos_libraries/{{ library }}:/opt/xos_libraries/{{ library }}:ro
+{% endfor %}
+{% for volume in xos_docker_volumes %}
+      - {{ volume.host }}:{{ volume.container }}{{ ":rw" if (volume.read_only is defined and not volume.read_only ) else ":ro" }}
+{% endfor %}
+    ports:
+      - "{{ xos_bootstrap_ui_port }}:{{ xos_bootstrap_ui_port }}"
+    logging:
+      driver: "json-file"
+      options:
+        max-size: "1000k"
+        max-file: "5"
+
+  xos_synchronizer_onboarding:
+    image: xosproject/xos
+    command: bash -c "cd /opt/xos/synchronizers/onboarding; ./run.sh"
+    networks:
+{% for network in xos_docker_networks %}
+     - {{ network }}
+{% endfor %}
+    labels:
+      org.xosproject.kind: synchronizer
+      org.xosproject.target: onboarding
+    links:
+      - xos_db
+    volumes:
+      - /var/run/docker.sock:/var/run/docker.sock
+      - ./key_import:/opt/xos/key_import:ro
+      - ./onboarding-docker-compose:/opt/xos/synchronizers/onboarding/docker-compose
+{% for service in xos_services %}
+      - {{ cord_dir }}/{{ service.path }}:/opt/xos_services/{{ service.path | basename }}:ro
+{% endfor %}
+{% for library in xos_libraries %}
+      - {{ cord_dir }}/orchestration/xos_libraries/{{ library }}:/opt/xos_libraries/{{ library }}:ro
+{% endfor %}
+    logging:
+      driver: "json-file"
+      options:
+        max-size: "1000k"
+        max-file: "5"
+
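As a rough illustration (outside the patch itself), the `networks:` loop in the template above emits one external network stanza per entry in `xos_docker_networks`. A minimal Python sketch of the same expansion, assuming a plain list as input rather than real Ansible variables:

```python
# Sketch: emulate the Jinja2 loop that emits one external network per entry.
# 'xos_docker_networks' is a stand-in for the Ansible variable of the same name.
def render_networks(xos_docker_networks):
    lines = ["networks:"]
    for network in xos_docker_networks:
        lines.append(f"  {network}:")
        lines.append("    external: true")
    return "\n".join(lines)

print(render_networks(["xos"]))
```

With the role's default `xos_docker_networks` of `["xos"]`, this yields the single external `xos` network seen in the generated compose file.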
diff --git a/roles/cord-profile/templates/xos.yaml.j2 b/roles/cord-profile/templates/xos.yaml.j2
new file mode 100644
index 0000000..626660c
--- /dev/null
+++ b/roles/cord-profile/templates/xos.yaml.j2
@@ -0,0 +1,87 @@
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: Onboard XOS
+
+imports:
+   - custom_types/xos.yaml
+
+topology_template:
+  node_templates:
+
+    xos:
+      type: tosca.nodes.XOS
+      properties:
+        ui_port: {{ xos_ui_port }}
+        bootstrap_ui_port: {{ xos_bootstrap_ui_port }}
+        docker_project_name: {{ cord_profile | regex_replace('\W','') }}
+        db_container_name: {{ cord_profile | regex_replace('\W','') }}bs_xos_db_1
+{% if use_redis %}
+        redis_container_name: {{ cord_profile | regex_replace('\W','') }}bs_xos_redis_1
+{% endif %}
+{% if frontend_only is defined %}
+        frontend_only: {{ frontend_only }}
+{% endif %}
+{% if source_ui_image is defined %}
+        source_ui_image: {{ source_ui_image }}
+{% endif %}
+
+    /opt/xos/xos_configuration/xos_common_config:
+      type: tosca.nodes.XOSVolume
+      properties:
+          host_path: {{ cord_profile_dir }}/xos_common_config
+          read_only: True
+      requirements:
+          - xos:
+             node: xos
+             relationship: tosca.relationships.UsedByXOS
+
+    /opt/cord_profile:
+      type: tosca.nodes.XOSVolume
+      properties:
+          host_path: {{ cord_profile_dir }}
+          read_only: True
+      requirements:
+          - xos:
+             node: xos
+             relationship: tosca.relationships.UsedByXOS
+
+{% for library in xos_libraries %}
+    /opt/xos_libraries/{{ library }}:
+      type: tosca.nodes.XOSVolume
+      properties:
+          host_path: {{ cord_dir }}/orchestration/xos_libraries/{{ library }}
+          read_only: True
+      requirements:
+          - xos:
+             node: xos
+             relationship: tosca.relationships.UsedByXOS
+
+{% endfor %}
+
+
+{% for service in xos_services %}
+    /opt/xos_services/{{ service.path | basename }}:
+      type: tosca.nodes.XOSVolume
+      properties:
+          host_path: {{ cord_dir }}/{{ service.path }}
+          read_only: {{ service.read_only | default("True") }}
+      requirements:
+          - xos:
+             node: xos
+             relationship: tosca.relationships.UsedByXOS
+
+{% endfor %}
+
+{% for volume in xos_docker_volumes %}
+    {{ volume.container }}:
+      type: tosca.nodes.XOSVolume
+      properties:
+          host_path: {{ volume.host }}
+          read_only: {{ volume.read_only | default("True") }}
+      requirements:
+          - xos:
+             node: xos
+             relationship: tosca.relationships.UsedByXOS
+
+{% endfor %}
+
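The `regex_replace('\W','')` filter used for `docker_project_name` and the container names strips any non-word character from the profile name so it is safe as a docker-compose project name. A Python sketch of the same transformation, using a hypothetical profile name for illustration:

```python
import re

def project_name(cord_profile):
    # Mirrors Jinja2's regex_replace('\W', ''): drop any non-word character
    # so the profile name is usable as a docker-compose project name.
    return re.sub(r"\W", "", cord_profile)

# The bootstrap DB container name then follows compose v1 naming:
# <project>bs_xos_db_1 ("rcord" is a hypothetical profile name)
print(project_name("rcord") + "bs_xos_db_1")  # → rcordbs_xos_db_1
```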
diff --git a/roles/cord-profile/templates/xos_common_config.j2 b/roles/cord-profile/templates/xos_common_config.j2
new file mode 100644
index 0000000..175be92
--- /dev/null
+++ b/roles/cord-profile/templates/xos_common_config.j2
@@ -0,0 +1,59 @@
+; xos_common_config
+; generated by platform-install/roles/cord-profile
+
+[plc]
+name=plc
+deployment=plc
+
+[db]
+name=xos
+user=postgres
+password=password
+host=xos_db
+port=5432
+
+[api]
+host=localhost
+port=8000
+ssl_key=None
+ssl_cert=None
+ca_ssl_cert=None
+ratelimit_enabled=0
+omf_enabled=0
+mail_support_address=support@localhost
+nova_enabled=True
+logfile=/var/log/xos.log
+
+[nova]
+admin_user=admin@domain.com
+admin_password=admin
+admin_tenant=admin
+url=http://localhost:5000/v2.0/
+default_image=None
+default_flavor=m1.small
+default_security_group=default
+ca_ssl_cert=/etc/ssl/certs/ca-certificates.crt
+
+[observer]
+pretend=False
+backoff_disabled=True
+images_directory=/opt/xos/images
+dependency_graph=/opt/xos/model-deps
+logfile=/var/log/xos_backend.log
+save_ansible_output=True
+node_key={{ cord_profile_dir }}/node_key
+
+[gui]
+disable_minidashboard={{ disable_minidashboard }}
+branding_name={{ gui_branding_name }}
+branding_icon={{ gui_branding_icon }}
+branding_favicon={{ gui_branding_favicon }}
+branding_bg={{ gui_branding_bg }}
+{% if gui_service_view_class %}
+service_view_class={{ gui_service_view_class }}
+{% endif %}
+
+{% if use_vtn %}
+[networking]
+use_vtn=True
+{% endif %}
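The generated `xos_common_config` is standard INI that XOS reads with a config parser; as a quick sketch of how such a file is consumed, here is Python's `configparser` reading a fragment copied from the template's static `[db]` section (values shown are the template defaults, not live configuration):

```python
import configparser

# Parse a fragment shaped like the generated xos_common_config file.
sample = """
[db]
name=xos
user=postgres
password=password
host=xos_db
port=5432
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)
print(cfg["db"]["host"])   # → xos_db
print(cfg.getint("db", "port"))  # → 5432
```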
diff --git a/roles/docker-install/defaults/main.yml b/roles/docker-install/defaults/main.yml
new file mode 100644
index 0000000..7ff3dde
--- /dev/null
+++ b/roles/docker-install/defaults/main.yml
@@ -0,0 +1,5 @@
+---
+# docker-install/defaults/main.yml
+
+docker_apt_repo: "deb https://apt.dockerproject.org/repo ubuntu-trusty main"
+
diff --git a/roles/docker-install/files/docker_apt_key.gpg b/roles/docker-install/files/docker_apt_key.gpg
new file mode 100644
index 0000000..f63466b
--- /dev/null
+++ b/roles/docker-install/files/docker_apt_key.gpg
@@ -0,0 +1,47 @@
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+Version: SKS 1.1.5
+
+mQINBFWln24BEADrBl5p99uKh8+rpvqJ48u4eTtjeXAWbslJotmC/CakbNSqOb9oddfzRvGV
+eJVERt/Q/mlvEqgnyTQy+e6oEYN2Y2kqXceUhXagThnqCoxcEJ3+KM4RmYdoe/BJ/J/6rHOj
+q7Omk24z2qB3RU1uAv57iY5VGw5p45uZB4C4pNNsBJXoCvPnTGAs/7IrekFZDDgVraPx/hdi
+wopQ8NltSfZCyu/jPpWFK28TR8yfVlzYFwibj5WKdHM7ZTqlA1tHIG+agyPf3Rae0jPMsHR6
+q+arXVwMccyOi+ULU0z8mHUJ3iEMIrpTX+80KaN/ZjibfsBOCjcfiJSB/acn4nxQQgNZigna
+32velafhQivsNREFeJpzENiGHOoyC6qVeOgKrRiKxzymj0FIMLru/iFF5pSWcBQB7PYlt8J0
+G80lAcPr6VCiN+4cNKv03SdvA69dCOj79PuO9IIvQsJXsSq96HB+TeEmmL+xSdpGtGdCJHHM
+1fDeCqkZhT+RtBGQL2SEdWjxbF43oQopocT8cHvyX6Zaltn0svoGs+wX3Z/H6/8P5anog43U
+65c0A+64Jj00rNDr8j31izhtQMRo892kGeQAaaxg4Pz6HnS7hRC+cOMHUU4HA7iMzHrouAdY
+eTZeZEQOA7SxtCME9ZnGwe2grxPXh/U/80WJGkzLFNcTKdv+rwARAQABtDdEb2NrZXIgUmVs
+ZWFzZSBUb29sIChyZWxlYXNlZG9ja2VyKSA8ZG9ja2VyQGRvY2tlci5jb20+iQIcBBABCgAG
+BQJWw7vdAAoJEFyzYeVS+w0QHysP/i37m4SyoOCVcnybl18vzwBEcp4VCRbXvHvOXty1gccV
+IV8/aJqNKgBV97lY3vrpOyiIeB8ETQegsrxFE7t/Gz0rsLObqfLEHdmn5iBJRkhLfCpzjeOn
+yB3Z0IJB6UogO/msQVYe5CXJl6uwr0AmoiCBLrVlDAktxVh9RWch0l0KZRX2FpHu8h+uM0/z
+ySqIidlYfLa3y5oHscU+nGU1i6ImwDTD3ysZC5jp9aVfvUmcESyAb4vvdcAHR+bXhA/RW8QH
+eeMFliWw7Z2jYHyuHmDnWG2yUrnCqAJTrWV+OfKRIzzJFBs4e88ru5h2ZIXdRepw/+COYj34
+LyzxR2cxr2u/xvxwXCkSMe7F4KZAphD+1ws61FhnUMi/PERMYfTFuvPrCkq4gyBjt3fFpZ2N
+R/fKW87QOeVcn1ivXl9id3MMs9KXJsg7QasT7mCsee2VIFsxrkFQ2jNpD+JAERRn9Fj4ArHL
+5TbwkkFbZZvSi6fr5h2GbCAXIGhIXKnjjorPY/YDX6X8AaHOW1zblWy/CFr6VFl963jrjJga
+g0G6tNtBZLrclZgWhOQpeZZ5Lbvz2ZA5CqRrfAVcwPNW1fObFIRtqV6vuVluFOPCMAAnOnqR
+02w9t17iVQjO3oVN0mbQi9vjuExXh1YoScVetiO6LSmlQfVEVRTqHLMgXyR/EMo7iQIcBBAB
+CgAGBQJXSWBlAAoJEFyzYeVS+w0QeH0QAI6btAfYwYPuAjfRUy9qlnPhZ+xt1rnwsUzsbmo8
+K3XTNh+l/R08nu0dsczw30Q1wju28fh1N8ay223+69f0+yICaXqR18AbGgFGKX7vo0gfEVax
+dItUN3eHNydGFzmeOKbAlrxIMECnSTG/TkFVYO9Ntlv9vSN2BupmTagTRErxLZKnVsWRzp+X
+elwlgU5BCZ6U6Ze8+bIc6F1bZstf17X8i6XNV/rOCLx2yP0hn1osoljoLPpW8nzkwvqYsYbC
+A28lMt1aqe0UWvRCqR0zxlKn17NZQqjbxcajEMCajoQ01MshmO5GWePViv2abCZ/iaC5zKqV
+T3deMJHLq7lum6qhA41E9gJH9QoqT+qgadheeFfoC1QP7cke+tXmYg2R39p3l5Hmm+JQbP4f
+9V5mpWExvHGCSbcatr35tnakIJZugq2ogzsm1djCSz9222RXl9OoFqsm1bNzA78+/cOt5N2c
+yhU0bM2T/zgh42YbDD+JDU/HSmxUIpU+wrGvZGM2FU/up0DRxOC4U1fL6HHlj8liNJWfEg3v
+hougOh66gGF9ik5j4eIlNoz6lst+gmvlZQ9/9hRDeoG+AbhZeIlQ4CCw+Y1j/+fUxIzKHPVK
++aFJd+oJVNvbojJW/SgDdSMtFwqOvXyYcHl30Ws0gZUeDyAmNGZeJ3kFklnApDmeKK+OiQI4
+BBMBAgAiBQJVpZ9uAhsvBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAKCRD3YiFXLFJgnbRf
+EAC9Uai7Rv20QIDlDogRzd+Vebg4ahyoUdj0CH+nAk40RIoq6G26u1e+sdgjpCa8jF6vrx+s
+mpgd1HeJdmpahUX0XN3X9f9qU9oj9A4I1WDalRWJh+tP5WNv2ySy6AwcP9QnjuBMRTnTK27p
+k1sEMg9oJHK5p+ts8hlSC4SluyMKH5NMVy9c+A9yqq9NF6M6d6/ehKfBFFLG9BX+XLBATvf1
+ZemGVHQusCQebTGv0C0V9yqtdPdRWVIEhHxyNHATaVYOafTj/EF0lDxLl6zDT6trRV5n9F1V
+CEh4Aal8L5MxVPcIZVO7NHT2EkQgn8CvWjV3oKl2GopZF8V4XdJRl90U/WDv/6cmfI08GkzD
+YBHhS8ULWRFwGKobsSTyIvnbk4NtKdnTGyTJCQ8+6i52s+C54PiNgfj2ieNn6oOR7d+bNCcG
+1CdOYY+ZXVOcsjl73UYvtJrO0Rl/NpYERkZ5d/tzw4jZ6FCXgggA/Zxcjk6Y1ZvIm8Mt8wLR
+FH9Nww+FVsCtaCXJLP8DlJLASMD9rl5QS9Ku3u7ZNrr5HWXPHXITX660jglyshch6CWeiUAT
+qjIAzkEQom/kEnOrvJAtkypRJ59vYQOedZ1sFVELMXg2UCkD/FwojfnVtjzYaTCeGwFQeqzH
+mM241iuOmBYPeyTY5veF49aBJA1gEJOQTvBR8Q==
+=74V2
+-----END PGP PUBLIC KEY BLOCK-----
diff --git a/roles/docker-install/handlers/main.yml b/roles/docker-install/handlers/main.yml
new file mode 100644
index 0000000..697fec7
--- /dev/null
+++ b/roles/docker-install/handlers/main.yml
@@ -0,0 +1,9 @@
+---
+# docker-install/handlers/main.yml
+
+- name: docker-restart
+  become: yes
+  service:
+    name: docker
+    state: restarted
+
diff --git a/roles/docker-install/tasks/main.yml b/roles/docker-install/tasks/main.yml
new file mode 100644
index 0000000..826d5bb
--- /dev/null
+++ b/roles/docker-install/tasks/main.yml
@@ -0,0 +1,48 @@
+---
+# docker-install/tasks/main.yml
+# note - tasks use per-task become (rather than play-level become) to preserve the `ansible_user_id` var
+
+- name: Prereqs and SSL support for apt
+  become: yes
+  apt:
+    name={{ item }}
+    update_cache=yes
+    cache_valid_time=3600
+  with_items:
+    - apt-transport-https
+    - ca-certificates
+    - python-pip
+
+- name: Trust docker apt key
+  become: yes
+  apt_key:
+    data: "{{ lookup('file', 'docker_apt_key.gpg') }}"
+
+- name: Add docker apt repo
+  become: yes
+  apt_repository:
+    repo: "{{ docker_apt_repo }}"
+
+- name: Install docker
+  become: yes
+  apt:
+    update_cache=yes
+    cache_valid_time=3600
+    name=docker-engine
+
+# Ansible's docker modules fail without docker-py, but docker-compose >1.9 fails when docker-py is installed, so pin docker-compose to 1.9
+- name: Install docker-compose and docker-py
+  become: yes
+  pip:
+    name: "{{ item }}"
+  with_items:
+    - docker-py
+    - docker-compose==1.9
+
+- name: Make current user part of the Docker group
+  become: yes
+  user:
+    name: "{{ ansible_user_id }}"
+    groups: "docker"
+    append: yes
+
diff --git a/roles/exampleservice-config/defaults/main.yml b/roles/exampleservice-config/defaults/main.yml
new file mode 100644
index 0000000..82098e0
--- /dev/null
+++ b/roles/exampleservice-config/defaults/main.yml
@@ -0,0 +1,6 @@
+---
+# exampleservice-config/defaults/main.yml
+
+cord_dir: "{{ ansible_user_dir + '/cord' }}"
+cord_profile_dir: "{{ ansible_user_dir + '/cord_profile' }}"
+
diff --git a/roles/exampleservice-config/tasks/main.yml b/roles/exampleservice-config/tasks/main.yml
new file mode 100644
index 0000000..dc21c17
--- /dev/null
+++ b/roles/exampleservice-config/tasks/main.yml
@@ -0,0 +1,29 @@
+---
+# exampleservice-config/tasks/main.yml
+
+- name: Create fake/empty ssh keys if profile hasn't created them
+  copy:
+    remote_src: True # file is local to the remote machine
+    force: False # only copy if destination file doesn't exist
+    src: "/dev/null"
+    dest: "{{ cord_profile_dir }}/key_import/{{ item }}"
+    mode: 0600
+  with_items:
+    - exampleservice_rsa
+    - exampleservice_rsa.pub
+
+- name: Copy exampleservice onboarding TOSCA file to cord_profile
+  copy:
+    src: "{{ cord_dir }}/orchestration/xos_services/exampleservice/xos/exampleservice-onboard.yaml"
+    dest: "{{ cord_profile_dir }}/exampleservice-onboard.yaml"
+
+- name: TOSCA to mount exampleservice volume in XOS container
+  template:
+    src: "xos-exampleservice.yaml.j2"
+    dest: "{{ cord_profile_dir }}/xos-exampleservice.yaml"
+
+- name: TOSCA to create exampleservice test config
+  template:
+    src: "test-exampleservice.yaml.j2"
+    dest: "{{ cord_profile_dir }}/test-exampleservice.yaml"
+
diff --git a/roles/xos-install/templates/exampleservice.yaml.j2 b/roles/exampleservice-config/templates/test-exampleservice.yaml.j2
similarity index 100%
rename from roles/xos-install/templates/exampleservice.yaml.j2
rename to roles/exampleservice-config/templates/test-exampleservice.yaml.j2
diff --git a/roles/exampleservice-config/templates/xos-exampleservice.yaml.j2 b/roles/exampleservice-config/templates/xos-exampleservice.yaml.j2
new file mode 100644
index 0000000..441e075
--- /dev/null
+++ b/roles/exampleservice-config/templates/xos-exampleservice.yaml.j2
@@ -0,0 +1,24 @@
+---
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: Have the XOS container mount the exampleservice volume
+
+imports:
+   - custom_types/xos.yaml
+
+topology_template:
+  node_templates:
+
+    xos:
+      type: tosca.nodes.XOS
+
+    /opt/xos_services/exampleservice:
+      type: tosca.nodes.XOSVolume
+      properties:
+          host_path: "{{ cord_dir }}/orchestration/xos_services/exampleservice"
+          read_only: True
+      requirements:
+          - xos:
+             node: xos
+             relationship: tosca.relationships.UsedByXOS
+
diff --git a/roles/exampleservice-onboard/defaults/main.yml b/roles/exampleservice-onboard/defaults/main.yml
new file mode 100644
index 0000000..463326e
--- /dev/null
+++ b/roles/exampleservice-onboard/defaults/main.yml
@@ -0,0 +1,8 @@
+---
+# exampleservice-onboard/defaults/main.yml
+
+cord_dir: "{{ ansible_user_dir + '/cord' }}"
+cord_profile_dir: "{{ ansible_user_dir + '/cord_profile' }}"
+
+xos_bootstrap_ui_port: 9001
+
diff --git a/roles/exampleservice-onboard/tasks/main.yml b/roles/exampleservice-onboard/tasks/main.yml
new file mode 100644
index 0000000..ba559d1
--- /dev/null
+++ b/roles/exampleservice-onboard/tasks/main.yml
@@ -0,0 +1,43 @@
+---
+# exampleservice-onboard/tasks/main.yml
+
+- name: Disable onboarding
+  command: "python /opt/xos/tosca/run.py {{ xos_admin_user }} /opt/cord_profile/disable-onboarding.yaml"
+  tags:
+    - skip_ansible_lint # TOSCA loading should be idempotent
+
+- name: Have XOS container mount exampleservice volume
+  command: "python /opt/xos/tosca/run.py {{ xos_admin_user }} /opt/cord_profile/xos-exampleservice.yaml"
+  tags:
+    - skip_ansible_lint # TOSCA loading should be idempotent
+
+- name: Onboard exampleservice
+  command: "python /opt/xos/tosca/run.py {{ xos_admin_user }} /opt/cord_profile/exampleservice-onboard.yaml"
+  tags:
+    - skip_ansible_lint # TOSCA loading should be idempotent
+
+- name: Enable onboarding
+  command: "python /opt/xos/tosca/run.py {{ xos_admin_user }} /opt/cord_profile/enable-onboarding.yaml"
+  tags:
+    - skip_ansible_lint # TOSCA loading should be idempotent
+
+- name: Wait for exampleservice to be onboarded
+  uri:
+    url: "http://localhost:{{ xos_bootstrap_ui_port }}/api/utility/onboarding/services/exampleservice/ready/"
+    method: GET
+    return_content: yes
+  register: xos_onboard_status
+  until: '"true" in xos_onboard_status.content'
+  retries: 60
+  delay: 2
+
+- name: Wait for XOS to be onboarded after exampleservice onboarding
+  uri:
+    url: "http://localhost:{{ xos_bootstrap_ui_port }}/api/utility/onboarding/xos/ready/"
+    method: GET
+    return_content: yes
+  register: xos_onboard_status
+  until: '"true" in xos_onboard_status.content'
+  retries: 60
+  delay: 2
+
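The two "Wait for ... to be onboarded" tasks above use Ansible's `until`/`retries`/`delay` pattern: poll a readiness URL until its body contains `"true"`, up to 60 times with a 2-second delay. A generic Python sketch of that polling loop, with a stand-in fetcher in place of a real HTTP GET against the onboarding endpoint:

```python
import time

def wait_until_ready(fetch, retries=60, delay=2.0):
    """Poll fetch() until its body contains "true", mirroring the
    uri/until/retries/delay pattern in the tasks above. fetch is any
    callable returning a response body as a string."""
    for _attempt in range(retries):
        if "true" in fetch():
            return True
        time.sleep(delay)
    return False

# Usage with a stand-in fetcher (a real one would GET the /ready/ URL):
responses = iter(["false", "false", "true"])
print(wait_until_ready(lambda: next(responses), retries=5, delay=0))  # → True
```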
diff --git a/roles/juju-compute-setup/tasks/main.yml b/roles/juju-compute-setup/tasks/main.yml
index 7473a06..d061697 100644
--- a/roles/juju-compute-setup/tasks/main.yml
+++ b/roles/juju-compute-setup/tasks/main.yml
@@ -68,8 +68,8 @@
   action: shell bash -c "source ~/admin-openrc.sh; nova hypervisor-list | grep '{{ item }}'"
   register: result
   until: result | success
-  retries: 5
-  delay: 5
+  retries: 10
+  delay: 10
   with_items: "{{ groups['compute'] }}"
   tags:
    - skip_ansible_lint # this really should be the os_server module, but ansible doesn't know about juju created openstack
diff --git a/roles/juju-setup/templates/opencloud_juju_config.yml.j2 b/roles/juju-setup/templates/opencloud_juju_config.yml.j2
index 564f28f..4379a9f 100644
--- a/roles/juju-setup/templates/opencloud_juju_config.yml.j2
+++ b/roles/juju-setup/templates/opencloud_juju_config.yml.j2
@@ -19,8 +19,6 @@
 
 mongodb: {}
 
-nagios: {}
-
 neutron-api:
   flat-network-providers: "*"
   openstack-origin: "cloud:trusty-kilo"
@@ -49,8 +47,6 @@
   config-flags: "firewall_driver=nova.virt.firewall.NoopFirewallDriver"
   openstack-origin: "cloud:trusty-kilo"
 
-nrpe: {}
-
 ntp:
   source: "0.ubuntu.pool.ntp.org 1.ubuntu.pool.ntp.org 2.ubuntu.pool.ntp.org 3.ubuntu.pool.ntp.org"
 
diff --git a/roles/maas-test-client-install/tasks/main.yml b/roles/maas-test-client-install/tasks/main.yml
index 5e4eb06..6b225b9 100644
--- a/roles/maas-test-client-install/tasks/main.yml
+++ b/roles/maas-test-client-install/tasks/main.yml
@@ -1,4 +1,6 @@
 ---
+# maas-test-client-install/tasks/main.yml
+
 - name: Create testclient LXD profile
   lxd_profile:
     name: testclient
@@ -49,3 +51,4 @@
   until: result | success
   retries: 3
   delay: 10
+
diff --git a/roles/platform-check/defaults/main.yml b/roles/platform-check/defaults/main.yml
index 666162d..16a8ef7 100644
--- a/roles/platform-check/defaults/main.yml
+++ b/roles/platform-check/defaults/main.yml
@@ -2,3 +2,7 @@
 # platform-check/defaults/main.yml
 
 onos_cord_dest: "{{ ansible_user_dir }}/onos-cord/"
+cord_profile_dir: "{{ ansible_user_dir + '/cord_profile' }}"
+
+xos_ui_port: 9000
+
diff --git a/roles/platform-check/tasks/main.yml b/roles/platform-check/tasks/main.yml
index 035945f..a67e837 100644
--- a/roles/platform-check/tasks/main.yml
+++ b/roles/platform-check/tasks/main.yml
@@ -20,11 +20,31 @@
   tags:
     - skip_ansible_lint
 
-- name: Tell XOS to refresh VTN configuration
+- name: Tell XOS to refresh VTN Service and compute nodes
   when: result | failed
-  make:
-    chdir: "{{ service_profile_repo_dest }}/{{ xos_configuration }}"
-    target: vtn
+  xostosca:
+    url: "http://xos.{{ site_suffix }}:{{ xos_ui_port }}/api/utility/tosca/run/"
+    user: "{{ xos_admin_user }}"
+    password:  "{{ xos_admin_pass }}"
+    recipe: "{{ lookup('file', cord_profile_dir + '/' + item ) }}"
+  with_items:
+    - openstack.yaml
+    - openstack-compute.yaml
+    - vtn-service.yaml
+
+- name: Pause to work around race in VTN or ONOS synchronizers
+  pause:
+    seconds: 20
+
+- name: Enable VTN for OpenStack Compute nodes
+  when: result | failed
+  xostosca:
+    url: "http://xos.{{ site_suffix }}:{{ xos_ui_port }}/api/utility/tosca/run/"
+    user: "{{ xos_admin_user }}"
+    password:  "{{ xos_admin_pass }}"
+    recipe: "{{ lookup('file', cord_profile_dir + '/' + item ) }}"
+  with_items:
+    - openstack-compute-vtn.yaml
 
 - name: Ensure br-int exists on all compute nodes (check VTN #2)
   when: result | failed
diff --git a/roles/repo/defaults/main.yml b/roles/repo/defaults/main.yml
new file mode 100644
index 0000000..b2af59f
--- /dev/null
+++ b/roles/repo/defaults/main.yml
@@ -0,0 +1,21 @@
+---
+# repo/defaults/main.yml
+
+cord_dir: "{{ ansible_user_dir + '/cord' }}"
+repo_dl_url: "https://storage.googleapis.com/git-repo-downloads/repo"
+
+# This is for repo v1.23, and will change, as repo_dl_url unfortunately lacks a version...
+repo_checksum: "sha256:e147f0392686c40cfd7d5e6f332c6ee74c4eab4d24e2694b3b0a0c037bf51dc5"
+
+repo_manifest_url: "https://gerrit.opencord.org/manifest"
+
+# Used to download specific gerrit changesets. Syntax is:
+#
+# gerrit_changesets:
+#  - path: build/platform-install
+#    revision: 2934/19
+#  - path: ....
+#    revision: #/#
+#
+gerrit_changesets: []
+
diff --git a/roles/repo/tasks/main.yml b/roles/repo/tasks/main.yml
new file mode 100644
index 0000000..779e406
--- /dev/null
+++ b/roles/repo/tasks/main.yml
@@ -0,0 +1,38 @@
+---
+# repo/tasks/main.yml
+
+- name: Download and install repo tool
+  become: yes
+  get_url:
+    url: "{{ repo_dl_url }}"
+    checksum: "{{ repo_checksum }}"
+    dest: "/usr/local/bin/repo"
+    owner: root
+    group: root
+    mode: 0755
+
+- name: Create CORD directory
+  file:
+    dest: "{{ cord_dir }}"
+    state: directory
+
+- name: Init CORD repos (master branch) using repo
+  command: "/usr/local/bin/repo init -u {{ repo_manifest_url }} -b master -g build,onos,orchestration"
+  args:
+    chdir: "{{ cord_dir }}"
+    creates: "{{ cord_dir }}/.repo"
+
+- name: Synchronize CORD repos using repo
+  command: "repo sync"
+  args:
+    chdir: "{{ cord_dir }}"
+    creates: "{{ cord_dir }}/build"
+
+- name: Download specific gerrit changesets using repo
+  command: "/usr/local/bin/repo download {{ item.path }} {{ item.revision }}"
+  args:
+    chdir: "{{ cord_dir }}"
+  with_items: "{{ gerrit_changesets }}"
+  tags:
+    - skip_ansible_lint # usually won't be run, except during dev
+
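The `get_url` task verifies the downloaded `repo` script against `repo_checksum`, which uses Ansible's `algorithm:hexdigest` format. A small Python sketch of the same check with `hashlib`, using an arbitrary byte string in place of the real download:

```python
import hashlib

def verify_checksum(data: bytes, checksum: str) -> bool:
    """Mirror Ansible get_url's 'algorithm:hexdigest' checksum format,
    e.g. the repo_checksum default above."""
    algorithm, expected = checksum.split(":", 1)
    return hashlib.new(algorithm, data).hexdigest() == expected

blob = b"example payload"  # stand-in for the downloaded repo script
digest = "sha256:" + hashlib.sha256(blob).hexdigest()
print(verify_checksum(blob, digest))  # → True
```

As the comment in the defaults notes, the pinned digest goes stale whenever the unversioned `repo_dl_url` serves a new release, at which point the checksum must be updated by hand.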
diff --git a/roles/teardown-profile/defaults/main.yml b/roles/teardown-profile/defaults/main.yml
new file mode 100644
index 0000000..42ace11
--- /dev/null
+++ b/roles/teardown-profile/defaults/main.yml
@@ -0,0 +1,10 @@
+---
+# teardown-profile/defaults/main.yml
+
+cord_dir: "{{ ansible_user_dir + '/cord' }}"
+
+cord_profile_dir: "{{ ansible_user_dir + '/cord_profile' }}"
+
+xos_docker_networks:
+  - "xos"
+
diff --git a/roles/teardown-profile/tasks/main.yml b/roles/teardown-profile/tasks/main.yml
new file mode 100644
index 0000000..3c9bbea
--- /dev/null
+++ b/roles/teardown-profile/tasks/main.yml
@@ -0,0 +1,37 @@
+---
+# teardown-profile/tasks/main.yml
+# Destroys the currently created profile
+# NOTE: ignoring errors so that incomplete builds can be removed
+
+- name: Stop and remove XOS containers
+  docker_service:
+    project_name: "{{ cord_profile | regex_replace('\\W','') }}"
+    project_src: "{{ cord_profile_dir }}/onboarding-docker-compose/"
+    state: absent
+    remove_images: local
+  ignore_errors: yes
+
+- name: Stop and remove XOS bootstrap containers
+  docker_service:
+    project_name: "{{ cord_profile | regex_replace('\\W','') }}bs"
+    project_src: "{{ cord_profile_dir }}"
+    files: "xos-bootstrap-docker-compose.yaml"
+    state: absent
+    remove_images: local
+  ignore_errors: yes
+
+# TODO: also remove leftover images here with the docker_image module?
+
+- name: Remove docker networks
+  docker_network:
+    name: "{{ item }}"
+    state: absent
+  with_items: "{{ xos_docker_networks }}"
+  ignore_errors: yes
+
+- name: Remove the cord_profile directory
+  file:
+    path: "{{ cord_profile_dir }}"
+    state: absent
+  ignore_errors: yes
+
diff --git a/roles/test-exampleservice/defaults/main.yml b/roles/test-exampleservice/defaults/main.yml
new file mode 100644
index 0000000..04b252e
--- /dev/null
+++ b/roles/test-exampleservice/defaults/main.yml
@@ -0,0 +1,6 @@
+---
+# test-exampleservice/defaults/main.yml
+
+cord_profile_dir: "{{ ansible_user_dir + '/cord_profile' }}"
+xos_ui_port: 9000
+
diff --git a/roles/test-exampleservice/tasks/main.yml b/roles/test-exampleservice/tasks/main.yml
index 331dfcf..12bf2af 100644
--- a/roles/test-exampleservice/tasks/main.yml
+++ b/roles/test-exampleservice/tasks/main.yml
@@ -1,12 +1,13 @@
 ---
 # test-exampleservice/tasks/main.yml
-#
 # Run tests to check that the single-node deployment has worked
 
-- name: Onboard ExampleService and instantiate a VM
-  make:
-    chdir: "{{ service_profile_repo_dest }}/{{ xos_configuration }}"
-    target: exampleservice
+- name: Load TOSCA to apply test config for ExampleService, over REST
+  xostosca:
+    url: "http://xos.{{ site_suffix }}:{{ xos_ui_port }}/api/utility/tosca/run/"
+    user: "{{ xos_admin_user }}"
+    password:  "{{ xos_admin_pass }}"
+    recipe: "{{ lookup('file', cord_profile_dir + '/test-exampleservice.yaml' ) }}"
 
 - name: Wait for ExampleService VM to come up
   shell: bash -c "source ~/admin-openrc.sh; nova list --all-tenants|grep 'exampleservice.*ACTIVE' > /dev/null"
@@ -82,3 +83,4 @@
 
 - name: Output from curl test
   debug: var=curltest.stdout_lines
+
diff --git a/roles/test-subscriber-config/defaults/main.yml b/roles/test-subscriber-config/defaults/main.yml
new file mode 100644
index 0000000..3fbf456
--- /dev/null
+++ b/roles/test-subscriber-config/defaults/main.yml
@@ -0,0 +1,5 @@
+---
+# test-subscriber-config/defaults/main.yml
+
+cord_profile_dir: "{{ ansible_user_dir + '/cord_profile' }}"
+
diff --git a/roles/test-subscriber-config/tasks/main.yml b/roles/test-subscriber-config/tasks/main.yml
new file mode 100644
index 0000000..46c1642
--- /dev/null
+++ b/roles/test-subscriber-config/tasks/main.yml
@@ -0,0 +1,10 @@
+---
+# test-subscriber-config/tasks/main.yml
+
+- name: Create test-subscriber.yaml TOSCA config
+  template:
+    src: test-subscriber.yaml.j2
+    dest: "{{ cord_profile_dir }}/test-subscriber.yaml"
+    owner: "{{ ansible_user_id }}"
+    mode: 0644
+
diff --git a/roles/xos-install/templates/cord-test-subscriber.yaml.j2 b/roles/test-subscriber-config/templates/test-subscriber.yaml.j2
similarity index 100%
rename from roles/xos-install/templates/cord-test-subscriber.yaml.j2
rename to roles/test-subscriber-config/templates/test-subscriber.yaml.j2
diff --git a/roles/test-subscriber-enable/tasks/main.yml b/roles/test-subscriber-enable/tasks/main.yml
new file mode 100644
index 0000000..eb7b66e
--- /dev/null
+++ b/roles/test-subscriber-enable/tasks/main.yml
@@ -0,0 +1,8 @@
+---
+# test-subscriber-enable/tasks/main.yml
+
+- name: Run TOSCA to add test-subscriber
+  command: "python /opt/xos/tosca/run.py {{ xos_admin_user }} /opt/cord_profile/test-subscriber.yaml"
+  tags:
+    - skip_ansible_lint # TOSCA loading should be idempotent
+
diff --git a/roles/test-vsg/tasks/main.yml b/roles/test-vsg/tasks/main.yml
index 4a44e7b..5e029f7 100644
--- a/roles/test-vsg/tasks/main.yml
+++ b/roles/test-vsg/tasks/main.yml
@@ -3,11 +3,6 @@
 #
 # Run tests to check that the CORD-in-a-Box deployment has worked.
 
-- name: Create cord subscriber
-  make:
-    chdir: "{{ service_profile_repo_dest }}/{{ xos_configuration }}"
-    target: cord-subscriber
-
 - name: Get name of compute node
   shell: bash -c "source ~/admin-openrc.sh; nova service-list|grep nova-compute|cut -d '|' -f 3"
   register: node_name
@@ -69,3 +64,4 @@
 
 - name: Output from ping test
   debug: var=pingtest.stdout_lines
+
diff --git a/roles/tosca-tests/tasks/main.yml b/roles/tosca-tests/tasks/main.yml
new file mode 100644
index 0000000..f244f5d
--- /dev/null
+++ b/roles/tosca-tests/tasks/main.yml
@@ -0,0 +1,19 @@
+---
+# tosca-tests/tasks/main.yml
+
+- name: Run TOSCA tests
+  command: "python ./alltests.py"
+  args:
+    chdir: "/opt/xos/tosca/tests"
+  register: tosca_tests_out
+  ignore_errors: yes
+  tags:
+    - skip_ansible_lint # run during testing only
+
+- name: Save output from TOSCA tests
+  copy:
+    content: "{{ tosca_tests_out.stdout_lines }}"
+    dest: "/tmp/tosca-tests.out"
+
+- name: Print output from TOSCA test
+  debug: var=tosca_tests_out.stdout_lines
diff --git a/roles/xos-bootstrap-hosts/defaults/main.yml b/roles/xos-bootstrap-hosts/defaults/main.yml
new file mode 100644
index 0000000..6b20046
--- /dev/null
+++ b/roles/xos-bootstrap-hosts/defaults/main.yml
@@ -0,0 +1,7 @@
+---
+# xos-bootstrap-hosts/defaults/main.yml
+
+cord_dir: "{{ ansible_user_dir + '/cord' }}"
+
+cord_profile_dir: "{{ ansible_user_dir + '/cord_profile' }}"
+
diff --git a/roles/xos-bootstrap-hosts/tasks/main.yml b/roles/xos-bootstrap-hosts/tasks/main.yml
new file mode 100644
index 0000000..32a589c
--- /dev/null
+++ b/roles/xos-bootstrap-hosts/tasks/main.yml
@@ -0,0 +1,20 @@
+---
+# xos-bootstrap-hosts/tasks/main.yml
+
+- name: Get the Docker container names for bootstrap containers
+  docker_service:
+    project_name: "{{ cord_profile | regex_replace('\\W','') }}bs"
+    project_src: "{{ cord_profile_dir }}"
+    files: "xos-bootstrap-docker-compose.yaml"
+    recreate: never
+  register: xos_bootstrap_out
+
+- name: Add the containers to Ansible groups on a per-container type basis
+  add_host:
+    name: "{{ xos_bootstrap_out.ansible_facts[item].keys() | first }}"
+    groups: "{{ item }}"
+    ansible_connection: "docker"
+    cord_profile: "{{ cord_profile }}"
+    ansible_ssh_user: "root"
+  with_items: "{{ xos_bootstrap_out.ansible_facts.keys() | list }}"
+
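The `add_host` task above walks the `ansible_facts` that `docker_service` registers: one key per compose service, each mapping container names to container details, from which it takes the first container name per service. A Python sketch of that extraction over an illustrative facts dict (the container names and structure below are examples, not real module output):

```python
# Sketch of the add_host grouping: take the first container name
# per compose service and use the service name as the group name.
facts = {
    "xos_db": {"rcordbs_xos_db_1": {"State": {"Running": True}}},
    "xos_bootstrap_ui": {"rcordbs_xos_bootstrap_ui_1": {"State": {"Running": True}}},
}

groups = {service: next(iter(containers)) for service, containers in facts.items()}
print(groups["xos_db"])  # → rcordbs_xos_db_1
```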
diff --git a/roles/xos-bootstrap/defaults/main.yml b/roles/xos-bootstrap/defaults/main.yml
new file mode 100644
index 0000000..ada7671
--- /dev/null
+++ b/roles/xos-bootstrap/defaults/main.yml
@@ -0,0 +1,8 @@
+---
+# xos-bootstrap/defaults/main.yml
+
+cord_profile_dir: "{{ ansible_user_dir + '/cord_profile' }}"
+
+xos_docker_networks:
+  - "xos"
+
diff --git a/roles/xos-bootstrap/tasks/main.yml b/roles/xos-bootstrap/tasks/main.yml
new file mode 100644
index 0000000..109d9ee
--- /dev/null
+++ b/roles/xos-bootstrap/tasks/main.yml
@@ -0,0 +1,15 @@
+---
+# xos-bootstrap/tasks/main.yml
+
+- name: Create docker networks
+  docker_network:
+    name: "{{ item }}"
+  with_items: "{{ xos_docker_networks }}"
+
+- name: Start XOS bootstrap containers
+  docker_service:
+    project_name: "{{ cord_profile | regex_replace('\\W','') }}bs"
+    project_src: "{{ cord_profile_dir }}"
+    files: "xos-bootstrap-docker-compose.yaml"
+  register: xos_bootstrap_out
+
diff --git a/roles/xos-clear-db/files/xos_clear_db.sql b/roles/xos-clear-db/files/xos_clear_db.sql
new file mode 100644
index 0000000..879cbc7
--- /dev/null
+++ b/roles/xos-clear-db/files/xos_clear_db.sql
@@ -0,0 +1,24 @@
+-- Clear the XOS database (used for testing)
+
+CREATE OR REPLACE FUNCTION truncate_tables(username IN VARCHAR) RETURNS void AS $$
+DECLARE
+  statements CURSOR FOR
+    SELECT tablename FROM pg_tables
+    WHERE tableowner = username AND schemaname = 'public';
+BEGIN
+  FOR stmt IN statements LOOP
+    EXECUTE 'TRUNCATE TABLE ' || quote_ident(stmt.tablename) || ' CASCADE;';
+  END LOOP;
+END;
+$$ LANGUAGE plpgsql;
+
+SELECT truncate_tables('postgres');
+
+SELECT setval('core_tenant_id_seq', 1);
+
+SELECT setval('core_deployment_id_seq', 1);
+
+SELECT setval('core_flavor_id_seq', 1);
+
+SELECT setval('core_service_id_seq', 1);
+
diff --git a/roles/xos-clear-db/tasks/main.yml b/roles/xos-clear-db/tasks/main.yml
new file mode 100644
index 0000000..3fb46a4
--- /dev/null
+++ b/roles/xos-clear-db/tasks/main.yml
@@ -0,0 +1,13 @@
+---
+# xos-clear-db/tasks/main.yml
+
+- name: Copy over database cleanup script
+  copy:
+    src: xos_clear_db.sql
+    dest: /tmp/xos_clear_db.sql
+
+- name: Run database cleanup script
+  command: "psql -U postgres -d xos -f /tmp/xos_clear_db.sql"
+  tags:
+    - skip_ansible_lint # test scenario destructive script
+
diff --git a/roles/xos-compute-setup/tasks/main.yml b/roles/xos-compute-setup/tasks/main.yml
deleted file mode 100644
index d8de0ed..0000000
--- a/roles/xos-compute-setup/tasks/main.yml
+++ /dev/null
@@ -1,20 +0,0 @@
----
-# xos-compute-setup/tasks/main.yml
-#
-# Tell XOS that a new compute node has been added
-
-- name: Create nodes/vtn TOSCA config
-  template:
-    src: "{{ item }}.j2"
-    dest: "{{ service_profile_repo_dest }}/{{ xos_configuration }}/{{ item }}"
-    owner: "{{ ansible_user_id }}"
-    mode: 0644
-  with_items:
-    - vtn.yaml
-    - nodes.yaml
-
-- name: Rebuild VTN configuration with new nodes block
-  make:
-    chdir: "{{ service_profile_repo_dest }}/{{ xos_configuration }}"
-    target: vtn
-
diff --git a/roles/xos-compute-setup/templates/vtn.yaml.j2 b/roles/xos-compute-setup/templates/vtn.yaml.j2
deleted file mode 100644
index f162609..0000000
--- a/roles/xos-compute-setup/templates/vtn.yaml.j2
+++ /dev/null
@@ -1,103 +0,0 @@
-tosca_definitions_version: tosca_simple_yaml_1_0
-
-imports:
-   - custom_types/xos.yaml
-
-description: autogenerated node tags file for VTN configuration
-
-topology_template:
-  node_templates:
-
-    service#ONOS_CORD:
-      type: tosca.nodes.ONOSService
-      requirements:
-      properties:
-          kind: onos
-          view_url: /admin/onos/onosservice/$id$/
-          no_container: true
-          rest_hostname: onos-cord
-          rest_port: 8182
-          replaces: service_ONOS_CORD
-
-    service#vtn:
-      type: tosca.nodes.VTNService
-      properties:
-          view_url: /admin/vtn/vtnservice/$id$/
-          privateGatewayMac: 00:00:00:00:00:01
-          localManagementIp: {{ management_network_ip }}
-          ovsdbPort: 6641
-          sshUser: root
-          sshKeyFile: /root/node_key
-          sshPort: 22
-          xosEndpoint: http://xos:8888/
-          xosUser: padmin@vicci.org
-          xosPassword: letmein
-          replaces: service_vtn
-          vtnAPIVersion: 2
-          controllerPort: onos-cord:6654
-
-{% for node in groups["compute"] %}
-{% if 'ipv4' in hostvars[node]['ansible_fabric'] %}
-
-    {{ hostvars[node]['ansible_hostname'] }}:
-      type: tosca.nodes.Node
-
-    # VTN bridgeId field for node {{ hostvars[node]['ansible_hostname'] }}
-    {{ hostvars[node]['ansible_hostname'] }}_bridgeId_tag:
-      type: tosca.nodes.Tag
-      properties:
-          name: bridgeId
-          value: of:0000{{ hostvars[node]['ansible_fabric']['macaddress'] | hwaddr('bare') }}
-      requirements:
-          - target:
-              node: {{ hostvars[node]['ansible_hostname'] }}
-              relationship: tosca.relationships.TagsObject
-          - service:
-              node: service#ONOS_CORD
-              relationship: tosca.relationships.MemberOfService
-
-    # VTN dataPlaneIntf field for node {{ hostvars[node]['ansible_hostname'] }}
-    {{ hostvars[node]['ansible_hostname'] }}_dataPlaneIntf_tag:
-      type: tosca.nodes.Tag
-      properties:
-          name: dataPlaneIntf
-          value: fabric
-      requirements:
-          - target:
-              node: {{ hostvars[node]['ansible_hostname'] }}
-              relationship: tosca.relationships.TagsObject
-          - service:
-              node: service#ONOS_CORD
-              relationship: tosca.relationships.MemberOfService
-
-    # VTN dataPlaneIp field for node {{ hostvars[node]['ansible_hostname'] }}
-    {{ hostvars[node]['ansible_hostname'] }}_dataPlaneIp_tag:
-      type: tosca.nodes.Tag
-      properties:
-          name: dataPlaneIp
-          value: {{ ( hostvars[node]['ansible_fabric']['ipv4']['address'] ~ '/' ~ hostvars[node]['ansible_fabric']['ipv4']['netmask'] ) | ipaddr('cidr') }}
-      requirements:
-          - target:
-              node: {{ hostvars[node]['ansible_hostname'] }}
-              relationship: tosca.relationships.TagsObject
-          - service:
-              node: service#ONOS_CORD
-              relationship: tosca.relationships.MemberOfService
-
-{% endif %}
-{% endfor %}
-
-    VTN_ONOS_app:
-      type: tosca.nodes.ONOSVTNApp
-      requirements:
-          - onos_tenant:
-              node: service#ONOS_CORD
-              relationship: tosca.relationships.TenantOfService
-          - vtn_service:
-              node: service#vtn
-              relationship: tosca.relationships.UsedByService
-      properties:
-          install_dependencies: http://mavenrepo:8080/repository/org/opencord/cord-config/{{ cord_app_version}}/cord-config-{{ cord_app_version }}.oar,http://mavenrepo:8080/repository/org/opencord/vtn/{{ cord_app_version }}/vtn-{{ cord_app_version }}.oar
-          dependencies: org.onosproject.drivers, org.onosproject.drivers.ovsdb, org.onosproject.openflow-base, org.onosproject.ovsdb-base, org.onosproject.dhcp
-          autogenerate: vtn-network-cfg
-
diff --git a/roles/xos-config/defaults/main.yml b/roles/xos-config/defaults/main.yml
new file mode 100644
index 0000000..c610f28
--- /dev/null
+++ b/roles/xos-config/defaults/main.yml
@@ -0,0 +1,6 @@
+---
+# xos-config/defaults/main.yml
+
+xos_admin_user: "xosadmin@opencord.org"
+
+xos_tosca_config_templates: []
diff --git a/roles/xos-config/tasks/main.yml b/roles/xos-config/tasks/main.yml
index 9898d3e..2e5f9cc 100644
--- a/roles/xos-config/tasks/main.yml
+++ b/roles/xos-config/tasks/main.yml
@@ -1,15 +1,9 @@
 ---
-# xos-head-start/tasks/main.yml
+# xos-config/tasks/main.yml
 
-# Performs any configuration of XOS that should be done right before starting
-# XOS. This includes copying the admin-openrc.sh, since we had to wait for juju
-# to finish before admin-openrc.sh was present.
-
-- name: Copy admin-openrc.sh to service-profile
-#  command: cp ~/admin-openrc.sh {{ service_profile_repo_dest }}/{{ xos_configuration }}
-  copy:
-      remote_src=True
-      src=~/admin-openrc.sh
-      dest={{ service_profile_repo_dest }}/{{ xos_configuration }}
-
+- name: Configure XOS with profile specific TOSCA
+  command: "python /opt/xos/tosca/run.py {{ xos_admin_user }} /opt/cord_profile/{{ item }}"
+  with_items: "{{ xos_tosca_config_templates }}"
+  tags:
+    - skip_ansible_lint # TOSCA loading should be idempotent
 
diff --git a/roles/xos-docker-images/defaults/main.yml b/roles/xos-docker-images/defaults/main.yml
new file mode 100644
index 0000000..22943b2
--- /dev/null
+++ b/roles/xos-docker-images/defaults/main.yml
@@ -0,0 +1,14 @@
+---
+# xos-docker-images/defaults/main.yml
+
+cord_dir: "{{ ansible_user_dir + '/cord' }}"
+
+build_xos_base_image: False
+build_xos_test_image: False
+
+deploy_docker_registry: "localhost:5000"
+deploy_docker_tag: "latest"
+
+push_xos_base_image: False
+push_xos_image: False
+
diff --git a/roles/xos-docker-images/tasks/main.yml b/roles/xos-docker-images/tasks/main.yml
new file mode 100644
index 0000000..4482aac
--- /dev/null
+++ b/roles/xos-docker-images/tasks/main.yml
@@ -0,0 +1,59 @@
+---
+# xos-docker-images/tasks/main.yml
+
+- name: Build xos-base docker image
+  when: build_xos_base_image
+  docker_image:
+    name: "xosproject/xos-base"
+    path: "{{ cord_dir }}/orchestration/xos/containers/xos"
+    dockerfile: "Dockerfile.base"
+
+- name: Pull xos-base docker image from Dockerhub
+  when: not build_xos_base_image
+  docker_image:
+    name: "xosproject/xos-base"
+
+- name: Obtain XOS git repo metadata
+  command: "git log --pretty=format:'{\"XOS_GIT_COMMIT_DATE\":\"%ci\", \"XOS_GIT_COMMIT_HASH\":\"%H\"}' -n 1"
+  args:
+    chdir: "{{ cord_dir }}/orchestration/xos/"
+  register: xos_git_metadata
+  tags:
+    - skip_ansible_lint # idempotent git metadata retrieval, git module can't do this
+
+- name: Copy over SSL CA certificates
+  copy:
+    src: "{{ playbook_dir }}/pki/intermediate_ca/certs/im_cert_chain.pem"
+    dest: "{{ cord_dir }}/orchestration/xos/containers/xos/local_certs.crt"
+    mode: 0644
+
+- name: Build xosproject/xos devel image
+  docker_image:
+    name: "xosproject/xos"
+    path: "{{ cord_dir }}/orchestration/xos/"
+    dockerfile: "containers/xos/Dockerfile.devel"
+    buildargs: "{{ xos_git_metadata.stdout }}"
+    pull: False # should use locally created, or already pulled xos-base image
+
+- name: Build xosproject/xos-test testing image
+  when: build_xos_test_image
+  docker_image:
+    name: "xosproject/xos-test"
+    path: "{{ cord_dir }}/orchestration/xos/"
+    dockerfile: "containers/xos/Dockerfile.test"
+    pull: False # use the locally built copy of xosproject/xos
+
+- name: Tag and push xos-base image to docker registry
+  when: push_xos_base_image
+  docker_image:
+    name: "{{ deploy_docker_registry }}/xosproject/xos-base"
+    tag: "{{ deploy_docker_tag }}"
+    push: yes
+
+- name: Tag and push xos image to docker registry
+  when: push_xos_image
+  docker_image:
+    name: "{{ deploy_docker_registry }}/xosproject/xos"
+    tag: "{{ deploy_docker_tag }}"
+    push: yes
+
diff --git a/roles/xos-install/defaults/main.yml b/roles/xos-install/defaults/main.yml
deleted file mode 100644
index fe04cec..0000000
--- a/roles/xos-install/defaults/main.yml
+++ /dev/null
@@ -1,18 +0,0 @@
----
-# default variables for xos-install role
-
-xos_repo_url: "https://gerrit.opencord.org/xos"
-xos_repo_dest: "{{ ansible_user_dir }}/xos"
-xos_repo_branch: "HEAD"
-
-xos_configuration: "devel"
-
-service_profile_repo_url: "https://gerrit.opencord.org/p/service-profile.git"
-service_profile_repo_dest: "{{ ansible_user_dir }}/service-profile"
-service_profile_repo_branch: "HEAD"
-
-docker_tag: "latest"
-docker_registry: "docker.io"
-local_docker_registry: "docker-registry:5000"
-
-cord_dest_dir: "/opt/cord"
diff --git a/roles/xos-install/tasks/main.yml b/roles/xos-install/tasks/main.yml
deleted file mode 100644
index 846d1a5..0000000
--- a/roles/xos-install/tasks/main.yml
+++ /dev/null
@@ -1,209 +0,0 @@
----
-# tasks for xos-install role
-
-- name: Install prerequisites
-  apt:
-    name={{ item }}
-    update_cache=yes
-    cache_valid_time=3600
-  become: yes
-  with_items:
-   - git
-   - make
-   - curl
-   - python-novaclient
-   - python-neutronclient
-   - python-keystoneclient
-   - python-glanceclient
-
-# ---- copy repos from the dev machine to the head node ----
-# note: this happens in the `cord` repo now
-
-# - name: Create cord destination directory
-#   become: yes
-#   file:
-#     path: "{{ cord_dest_dir }}"
-#     state: directory
-#     mode: 0755
-#     owner: "{{ ansible_user_id }}"
-
-# - name: Copy the whole repo tree
-#   synchronize:
-#       src: "{{ playbook_dir }}/../../../cord/"
-#       dest: "{{ cord_dest_dir }}/"
-
-- name: Create directory xos_services
-  file:
-    path: "{{ ansible_user_dir }}/xos_services"
-    state: directory
-    mode: 0755
-
-- name: Create directory xos_libraries
-  file:
-    path: "{{ ansible_user_dir }}/xos_libraries"
-    state: directory
-    mode: 0755
-
-- name: Create bindings to service-profile and xos
-  become: yes
-  mount:
-      src: "{{ cord_dest_dir }}/orchestration/{{ item }}"
-      name: "{{ ansible_user_dir }}/{{ item }}"
-      fstype: none
-      opts: rw,bind
-      state: mounted
-  with_items:
-      - service-profile
-      - xos
-
-- name: Create bindings for xos services
-  become: yes
-  mount:
-      src: "{{ cord_dest_dir }}/orchestration/xos_services/{{ item }}"
-      name: "{{ ansible_user_dir }}/xos_services/{{ item }}"
-      fstype: none
-      opts: rw,bind
-      state: mounted
-  with_items:
-      - exampleservice
-      - fabric
-      - globalxos
-      - hypercache
-      - metro-net
-      - monitoring
-      - onos-service
-      - openstack
-      - vrouter
-      - vsg
-      - vtr
-
-- name: Create bindings for xos services that reside in onos
-  become: yes
-  mount:
-      src: "{{ cord_dest_dir }}/onos-apps/apps/{{ item }}"
-      name: "{{ ansible_user_dir }}/xos_services/{{ item }}"
-      fstype: none
-      opts: rw,bind
-      state: mounted
-  with_items:
-      - vtn
-      - olt
-
-- name: Create bindings for xos libraries
-  become: yes
-  mount:
-      src: "{{ cord_dest_dir }}/orchestration/xos_libraries/{{ item }}"
-      name: "{{ ansible_user_dir }}/xos_libraries/{{ item }}"
-      fstype: none
-      opts: rw,bind
-      state: mounted
-  with_items:
-      - ng-xos-lib
-
-# ----  alternatively, check out repos from Internet ---
-
-- name: Clone service-profile repo
-  git:
-    repo={{ service_profile_repo_url }}
-    dest={{ service_profile_repo_dest }}
-    version={{ service_profile_repo_branch }}
-    force=yes
-  when:
-    False
-
-# ----  install keys ----
-
-- name: Copy over SSH keys
-  command: cp ~/.ssh/{{ item }} {{ service_profile_repo_dest }}/{{ xos_configuration }}/
-  with_items:
-   - id_rsa
-   - id_rsa.pub
-  tags:
-    - skip_ansible_lint
-
-- name: Copy over node key
-  command: cp {{ ansible_user_dir }}/node_key {{ service_profile_repo_dest }}/{{ xos_configuration }}/
-  tags:
-    - skip_ansible_lint
-
-- name: Set ownership and permissions of keys
-  file:
-    path={{ service_profile_repo_dest }}/{{ xos_configuration }}/{{ item }}
-    owner={{ ansible_user_id }}
-#    mode=0600
-  with_items:
-   - id_rsa
-   - id_rsa.pub
-   - node_key
-
-- name: Copy over core api key
-  copy:
-    src: "{{ playbook_dir }}/pki/intermediate_ca/private/xos-core.{{ site_suffix }}_key.pem"
-    dest: "{{ service_profile_repo_dest }}/{{ xos_configuration }}/core_api_key.pem"
-    mode: 0600
-
-- name: Copy over core api cert
-  copy:
-    src: "{{ playbook_dir }}/pki/intermediate_ca/certs/xos-core.{{ site_suffix }}_cert_chain.pem"
-    dest: "{{ service_profile_repo_dest }}/{{ xos_configuration }}/core_api_cert.pem"
-
-- name: Create templated TOSCA files
-  template:
-    src: "{{ item }}.j2"
-    dest: "{{ service_profile_repo_dest }}/{{ xos_configuration }}/{{ item }}"
-  with_items: "{{ xos_tosca_templates }}"
-
-- name: Download Glance VM images
-  get_url:
-    url={{ item.url }}
-    checksum={{ item.checksum }}
-    dest={{ service_profile_repo_dest }}/{{ xos_configuration }}/images/{{ item.name }}.qcow2
-  with_items: "{{ xos_images }}"
-
-# ---- pull docker images ----
-
-- name: Check to see if registry is reachable
-  command: curl -sf http://docker-registry:5000/
-  ignore_errors: yes
-  register: docker_registry_check
-  tags:
-    - skip_ansible_lint
-
-- name: Use registry if it is available
-  set_fact:
-     docker_registry: "{{ local_docker_registry }}"
-     docker_opts: "--insecure-registry {{ local_docker_registry }}"
-     docker_tag: "candidate"
-  when: docker_registry_check|succeeded
-
-- name: Pull docker images for XOS
-  become: yes
-  command: docker pull {{ docker_registry }}/{{ item }}:{{ docker_tag }}
-  with_items:
-    - xosproject/xos-base
-    - xosproject/xos-postgres
-    - xosproject/cord-app-build
-    - redis
-    - nginx
-    - node
-  tags:
-    - skip_ansible_lint
-
-- name: Tag the images downloaded from the local registry
-  command: docker tag {{ docker_registry }}/{{ item }}:{{ docker_tag }} {{ item }}:latest
-  with_items:
-    - xosproject/xos-base
-    - xosproject/xos-postgres
-    - xosproject/cord-app-build
-    - redis
-    - nginx
-  when: docker_registry_check|succeeded
-
-- name: Separately tag the node image with tag argon
-  command: docker tag {{ docker_registry }}/node:{{ docker_tag }} node:argon
-  when: docker_registry_check|succeeded
-
-
-
-
-
diff --git a/roles/xos-install/templates/nodes.yaml.j2 b/roles/xos-install/templates/nodes.yaml.j2
deleted file mode 100644
index 7ba953b..0000000
--- a/roles/xos-install/templates/nodes.yaml.j2
+++ /dev/null
@@ -1,31 +0,0 @@
-tosca_definitions_version: tosca_simple_yaml_1_0
-
-imports:
-   - custom_types/xos.yaml
-
-description: list of compute nodes, created by platform-install
-
-topology_template:
-  node_templates:
-
-# Site/Deployment, fully defined in deployment.yaml
-    {{ site_name }}:
-      type: tosca.nodes.Site
-
-    {{ deployment_type }}:
-      type: tosca.nodes.Deployment
-
-# compute nodes
-{% for node in groups["compute"] %}
-    {{ hostvars[node]['ansible_hostname'] }}:
-      type: tosca.nodes.Node
-      requirements:
-        - site:
-            node: {{ site_name }}
-            relationship: tosca.relationships.MemberOfSite
-        - deployment:
-            node: {{ deployment_type }}
-            relationship: tosca.relationships.MemberOfDeployment
-
-{% endfor %}
-
diff --git a/roles/xos-install/templates/vtn.yaml.j2 b/roles/xos-install/templates/vtn.yaml.j2
deleted file mode 100644
index f162609..0000000
--- a/roles/xos-install/templates/vtn.yaml.j2
+++ /dev/null
@@ -1,103 +0,0 @@
-tosca_definitions_version: tosca_simple_yaml_1_0
-
-imports:
-   - custom_types/xos.yaml
-
-description: autogenerated node tags file for VTN configuration
-
-topology_template:
-  node_templates:
-
-    service#ONOS_CORD:
-      type: tosca.nodes.ONOSService
-      requirements:
-      properties:
-          kind: onos
-          view_url: /admin/onos/onosservice/$id$/
-          no_container: true
-          rest_hostname: onos-cord
-          rest_port: 8182
-          replaces: service_ONOS_CORD
-
-    service#vtn:
-      type: tosca.nodes.VTNService
-      properties:
-          view_url: /admin/vtn/vtnservice/$id$/
-          privateGatewayMac: 00:00:00:00:00:01
-          localManagementIp: {{ management_network_ip }}
-          ovsdbPort: 6641
-          sshUser: root
-          sshKeyFile: /root/node_key
-          sshPort: 22
-          xosEndpoint: http://xos:8888/
-          xosUser: padmin@vicci.org
-          xosPassword: letmein
-          replaces: service_vtn
-          vtnAPIVersion: 2
-          controllerPort: onos-cord:6654
-
-{% for node in groups["compute"] %}
-{% if 'ipv4' in hostvars[node]['ansible_fabric'] %}
-
-    {{ hostvars[node]['ansible_hostname'] }}:
-      type: tosca.nodes.Node
-
-    # VTN bridgeId field for node {{ hostvars[node]['ansible_hostname'] }}
-    {{ hostvars[node]['ansible_hostname'] }}_bridgeId_tag:
-      type: tosca.nodes.Tag
-      properties:
-          name: bridgeId
-          value: of:0000{{ hostvars[node]['ansible_fabric']['macaddress'] | hwaddr('bare') }}
-      requirements:
-          - target:
-              node: {{ hostvars[node]['ansible_hostname'] }}
-              relationship: tosca.relationships.TagsObject
-          - service:
-              node: service#ONOS_CORD
-              relationship: tosca.relationships.MemberOfService
-
-    # VTN dataPlaneIntf field for node {{ hostvars[node]['ansible_hostname'] }}
-    {{ hostvars[node]['ansible_hostname'] }}_dataPlaneIntf_tag:
-      type: tosca.nodes.Tag
-      properties:
-          name: dataPlaneIntf
-          value: fabric
-      requirements:
-          - target:
-              node: {{ hostvars[node]['ansible_hostname'] }}
-              relationship: tosca.relationships.TagsObject
-          - service:
-              node: service#ONOS_CORD
-              relationship: tosca.relationships.MemberOfService
-
-    # VTN dataPlaneIp field for node {{ hostvars[node]['ansible_hostname'] }}
-    {{ hostvars[node]['ansible_hostname'] }}_dataPlaneIp_tag:
-      type: tosca.nodes.Tag
-      properties:
-          name: dataPlaneIp
-          value: {{ ( hostvars[node]['ansible_fabric']['ipv4']['address'] ~ '/' ~ hostvars[node]['ansible_fabric']['ipv4']['netmask'] ) | ipaddr('cidr') }}
-      requirements:
-          - target:
-              node: {{ hostvars[node]['ansible_hostname'] }}
-              relationship: tosca.relationships.TagsObject
-          - service:
-              node: service#ONOS_CORD
-              relationship: tosca.relationships.MemberOfService
-
-{% endif %}
-{% endfor %}
-
-    VTN_ONOS_app:
-      type: tosca.nodes.ONOSVTNApp
-      requirements:
-          - onos_tenant:
-              node: service#ONOS_CORD
-              relationship: tosca.relationships.TenantOfService
-          - vtn_service:
-              node: service#vtn
-              relationship: tosca.relationships.UsedByService
-      properties:
-          install_dependencies: http://mavenrepo:8080/repository/org/opencord/cord-config/{{ cord_app_version}}/cord-config-{{ cord_app_version }}.oar,http://mavenrepo:8080/repository/org/opencord/vtn/{{ cord_app_version }}/vtn-{{ cord_app_version }}.oar
-          dependencies: org.onosproject.drivers, org.onosproject.drivers.ovsdb, org.onosproject.openflow-base, org.onosproject.ovsdb-base, org.onosproject.dhcp
-          autogenerate: vtn-network-cfg
-
diff --git a/roles/xos-onboard-hosts/defaults/main.yml b/roles/xos-onboard-hosts/defaults/main.yml
new file mode 100644
index 0000000..ee4dbb9
--- /dev/null
+++ b/roles/xos-onboard-hosts/defaults/main.yml
@@ -0,0 +1,7 @@
+---
+# xos-onboard-hosts/defaults/main.yml
+
+cord_dir: "{{ ansible_user_dir + '/cord' }}"
+
+cord_profile_dir: "{{ ansible_user_dir + '/cord_profile' }}"
+
diff --git a/roles/xos-onboard-hosts/tasks/main.yml b/roles/xos-onboard-hosts/tasks/main.yml
new file mode 100644
index 0000000..5adf7d6
--- /dev/null
+++ b/roles/xos-onboard-hosts/tasks/main.yml
@@ -0,0 +1,19 @@
+---
+# xos-onboard-hosts/tasks/main.yml
+
+- name: Get the Docker container names for onboarded containers
+  docker_service:
+    project_name: "{{ cord_profile | regex_replace('\\W','') }}"
+    project_src: "{{ cord_profile_dir }}/onboarding-docker-compose/"
+    recreate: never
+  register: xos_onboard_out
+
+- name: Add the containers to Ansible groups on a per-container type basis
+  add_host:
+    name: "{{ xos_onboard_out.ansible_facts[item].keys() | first }}"
+    groups: "{{ item }}"
+    ansible_connection: "docker"
+    cord_profile: "{{ cord_profile }}"
+    ansible_ssh_user: "root"
+  with_items: "{{ xos_onboard_out.ansible_facts.keys() | list }}"
+
diff --git a/roles/xos-onboarding/defaults/main.yml b/roles/xos-onboarding/defaults/main.yml
new file mode 100644
index 0000000..b9d946f
--- /dev/null
+++ b/roles/xos-onboarding/defaults/main.yml
@@ -0,0 +1,12 @@
+---
+# xos-onboarding/defaults/main.yml
+
+cord_dir: "{{ ansible_user_dir + '/cord' }}"
+
+xos_bootstrap_ui_port: 9001
+
+xos_libraries:
+  - "ng-xos-lib"
+
+xos_services: []
+
diff --git a/roles/xos-onboarding/tasks/main.yml b/roles/xos-onboarding/tasks/main.yml
new file mode 100644
index 0000000..df3a205
--- /dev/null
+++ b/roles/xos-onboarding/tasks/main.yml
@@ -0,0 +1,86 @@
+---
+# xos-onboarding/tasks/main.yml
+
+- name: Wait for XOS to be ready
+  wait_for:
+    host: localhost
+    port: "{{ xos_bootstrap_ui_port }}"
+    timeout: 120
+
+- name: Bootstrap XOS database - create site, deployment, admin user
+  command: "python /opt/xos/tosca/run.py none /opt/cord_profile/{{ item }}"
+  with_items:
+    - "fixtures.yaml"
+    - "deployment.yaml"
+  tags:
+    - skip_ansible_lint # TOSCA loading should be idempotent
+
+- name: Configure XOS with xos.yaml TOSCA
+  command: "python /opt/xos/tosca/run.py {{ xos_admin_user }} /opt/cord_profile/xos.yaml"
+  tags:
+    - skip_ansible_lint # TOSCA loading should be idempotent
+
+- name: Wait for XOS to be onboarded
+  uri:
+    url: "http://localhost:{{ xos_bootstrap_ui_port }}/api/utility/onboarding/xos/ready/"
+    method: GET
+    return_content: yes
+  register: xos_onboard_status
+  until: '"true" in xos_onboard_status.content'
+  retries: 120
+  delay: 2
+
+- name: Disable onboarding
+  command: "python /opt/xos/tosca/run.py {{ xos_admin_user }} /opt/cord_profile/disable-onboarding.yaml"
+  tags:
+    - skip_ansible_lint # TOSCA loading should be idempotent
+
+- name: Onboard libraries
+  command: "python /opt/xos/tosca/run.py {{ xos_admin_user }} /opt/xos_libraries/{{ item }}/{{ item }}-onboard.yaml"
+  with_items: "{{ xos_libraries }}"
+  tags:
+    - skip_ansible_lint # TOSCA loading should be idempotent
+
+- name: Onboard services
+  command: "python /opt/xos/tosca/run.py {{ xos_admin_user }} /opt/xos_services/{{ item.path | basename }}/xos/{{ item.name }}-onboard.yaml"
+  with_items: "{{ xos_services }}"
+  tags:
+    - skip_ansible_lint # TOSCA loading should be idempotent
+
+- name: Enable onboarding
+  command: "python /opt/xos/tosca/run.py {{ xos_admin_user }} /opt/cord_profile/enable-onboarding.yaml"
+  tags:
+    - skip_ansible_lint # TOSCA loading should be idempotent
+
+- name: Wait for libraries to be onboarded
+  uri:
+    url: "http://localhost:{{ xos_bootstrap_ui_port }}/api/utility/onboarding/services/{{ item }}/ready/"
+    method: GET
+    return_content: yes
+  register: xos_onboard_status
+  until: '"true" in xos_onboard_status.content'
+  retries: 60
+  delay: 5
+  with_items: "{{ xos_libraries }}"
+
+- name: Wait for services to be onboarded
+  uri:
+    url: "http://localhost:{{ xos_bootstrap_ui_port }}/api/utility/onboarding/services/{{ item.name }}/ready/"
+    method: GET
+    return_content: yes
+  register: xos_onboard_status
+  until: '"true" in xos_onboard_status.content'
+  retries: 60
+  delay: 5
+  with_items: "{{ xos_services }}"
+
+- name: Wait for XOS to be onboarded after service onboarding
+  uri:
+    url: "http://localhost:{{ xos_bootstrap_ui_port }}/api/utility/onboarding/xos/ready/"
+    method: GET
+    return_content: yes
+  register: xos_onboard_status
+  until: '"true" in xos_onboard_status.content'
+  retries: 60
+  delay: 5
+
diff --git a/roles/xos-ready/defaults/main.yml b/roles/xos-ready/defaults/main.yml
new file mode 100644
index 0000000..4dc61e6
--- /dev/null
+++ b/roles/xos-ready/defaults/main.yml
@@ -0,0 +1,4 @@
+---
+# xos-ready/defaults/main.yml
+
+xos_ui_port: 9000
diff --git a/roles/xos-ready/tasks/main.yml b/roles/xos-ready/tasks/main.yml
new file mode 100644
index 0000000..ffdb98c
--- /dev/null
+++ b/roles/xos-ready/tasks/main.yml
@@ -0,0 +1,9 @@
+---
+# xos-ready/tasks/main.yml
+
+- name: Wait for XOS to be ready after service onboarding
+  wait_for:
+    host: localhost
+    port: "{{ xos_ui_port }}"
+    timeout: 60
+
diff --git a/roles/xos-test-restore-db/tasks/main.yml b/roles/xos-test-restore-db/tasks/main.yml
new file mode 100644
index 0000000..05d2518
--- /dev/null
+++ b/roles/xos-test-restore-db/tasks/main.yml
@@ -0,0 +1,28 @@
+---
+# xos-test-restore-db/tasks/main.yml
+
+- name: Restore core initial data from fixture
+  command: python /opt/xos/manage.py --noobserver loaddata /opt/xos/core/fixtures/core_initial_data.json
+  tags:
+    - skip_ansible_lint # testing only
+
+
+- name: Start loading XOS config
+  command: "python /opt/xos/tosca/run.py none /opt/cord_profile/{{ item }}"
+  with_items:
+    - "fixtures.yaml"
+    - "deployment.yaml"
+  tags:
+    - skip_ansible_lint # TOSCA loading should be idempotent
+
+
+- name: Continue loading XOS config (as admin user)
+  command: "python /opt/xos/tosca/run.py {{ xos_admin_user }} /opt/cord_profile/{{ item }}"
+  with_items:
+    - "sample.yaml"
+    - "management-net.yaml"
+    - "services.yaml"
+    - "volt-devices.yaml"
+  tags:
+    - skip_ansible_lint # TOSCA loading should be idempotent
+
diff --git a/scripts/bootstrap_ansible.sh b/scripts/bootstrap_ansible.sh
deleted file mode 100755
index 5dce4b2..0000000
--- a/scripts/bootstrap_ansible.sh
+++ /dev/null
@@ -1,26 +0,0 @@
-#!/bin/bash
-#
-# Copyright 2012 the original author or authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-set -e
-
-echo "Installing Ansible..."
-apt-get install -y software-properties-common
-apt-add-repository ppa:ansible/ansible
-apt-get update
-apt-get install -y ansible
-cp /platform-install/ansible.cfg /etc/ansible/ansible.cfg
-
diff --git a/scripts/cord-bootstrap.sh b/scripts/cord-bootstrap.sh
new file mode 100644
index 0000000..760c304
--- /dev/null
+++ b/scripts/cord-bootstrap.sh
@@ -0,0 +1,128 @@
+#!/usr/bin/env bash
+# cord-bootstrap.sh
+# Bootstraps environment and downloads CORD repos
+
+set -e
+set -x
+
+CORDDIR=~/cord
+
+function bootstrap() {
+
+  if [ ! -x "/usr/bin/ansible" ]
+  then
+    echo "Installing Ansible..."
+    sudo apt-get update
+    sudo apt-get install -y software-properties-common
+    sudo apt-add-repository -y ppa:ansible/ansible
+    sudo apt-get update
+    sudo apt-get install -y ansible python-netaddr
+  fi
+
+  if [ ! -x "/usr/local/bin/repo" ]
+  then
+    echo "Installing repo..."
+    REPO_SHA256SUM="e147f0392686c40cfd7d5e6f332c6ee74c4eab4d24e2694b3b0a0c037bf51dc5"
+    curl -o /tmp/repo https://storage.googleapis.com/git-repo-downloads/repo
+    echo "$REPO_SHA256SUM  /tmp/repo" | sha256sum -c -
+    sudo mv /tmp/repo /usr/local/bin/repo
+    sudo chmod a+x /usr/local/bin/repo
+  fi
+
+  if [ ! -d "$CORDDIR" ]
+  then
+    echo "Downloading CORD/XOS..."
+
+    if [ ! -e "$HOME/.gitconfig" ]
+    then
+      echo "No ~/.gitconfig, setting testing defaults"
+      git config --global user.name 'Test User'
+      git config --global user.email 'test@null.com'
+      git config --global color.ui false
+    fi
+
+    mkdir $CORDDIR && cd $CORDDIR
+    repo init -u https://gerrit.opencord.org/manifest -b master -g build,onos,orchestration
+    repo sync
+
+    # check out gerrit branches using repo
+    for gerrit_branch in ${GERRIT_BRANCHES[@]}; do
+      echo "checking out opencord gerrit branch: $gerrit_branch"
+      repo download ${gerrit_branch/:/ }
+    done
+  fi
+
+  if [ ! -x "/usr/bin/docker" ]
+  then
+    echo "Installing Devel Tools..."
+    cd ${CORDDIR}/build/platform-install
+    ansible-playbook -i inventory/localhost devel-tools-playbook.yml
+  fi
+
+  set +x
+  echo "*******************************************************************************"
+  echo "*  IMPORTANT: Logout and login so your account is added to the docker group!  *"
+  echo "*   Then 'cd ${CORDDIR}/build/platform-install' and start your CORD profile.  *"
+  echo "*        Need help?  Check out the wiki at: https://wiki.opencord.org/        *"
+  echo "*******************************************************************************"
+
+}
+
+function cleanup() {
+  if [ ! -x "/usr/bin/ansible" ]
+  then
+    echo "Ansible not installed, can't clean up. Is this the initial run?"
+  else
+    echo "Cleaning up - destroying docker containers..."
+    cd ${CORDDIR}/build/platform-install
+    ansible-playbook -i inventory/localhost teardown-playbook.yml
+  fi
+}
+
+function cord_profile() {
+  echo "Running a profile is broken due to a docker group membership issue"
+}
+
+# options that may be set by getopt
+GERRIT_BRANCHES=()
+CLEANUP=0
+CORD_PROFILE=""
+
+while getopts "b:hcp:" opt; do
+  case ${opt} in
+    b ) GERRIT_BRANCHES+=("$OPTARG")
+      ;;
+    c ) CLEANUP=1
+      ;;
+    h ) echo "Usage:"
+      echo "    $0                prep system to run a CORD profile"
+      echo "    $0 -b <project:changeset/revision>  check out a changeset from gerrit. Can"
+      echo "                      be used multiple times."
+      echo "    $0 -c             cleanup from previous test"
+      echo "    $0 -p <profile>   prep then start running the specified profile"
+      echo "    $0 -h             display this help message"
+      exit 0
+      ;;
+    p ) CORD_PROFILE=$OPTARG
+      ;;
+    \? ) echo "Invalid option: -$OPTARG"
+      exit 1
+      ;;
+  esac
+done
+
+# "main" function
+if [[ $CLEANUP -eq 1 ]]
+then
+  cleanup
+fi
+
+bootstrap
+
+if [[ -n "$CORD_PROFILE" ]]
+then
+  set -x
+  cord_profile
+fi
+
+exit 0
diff --git a/teardown-playbook.yml b/teardown-playbook.yml
new file mode 100644
index 0000000..17c44ed
--- /dev/null
+++ b/teardown-playbook.yml
@@ -0,0 +1,16 @@
+---
+# teardown-playbook.yml
+
+- name: Include vars
+  hosts: all
+  tasks:
+    - name: Include variables
+      include_vars: "{{ item }}"
+      with_items:
+        - "profile_manifests/{{ cord_profile }}.yml"
+        - profile_manifests/local_vars.yml
+
+- name: Teardown CORD profile
+  hosts: head
+  roles:
+   - teardown-profile
diff --git a/templates/cord.yaml b/templates/cord.yml
similarity index 100%
rename from templates/cord.yaml
rename to templates/cord.yml
diff --git a/vars/aztest.yml b/vars/aztest.yml
deleted file mode 100644
index 8a2ea6c..0000000
--- a/vars/aztest.yml
+++ /dev/null
@@ -1,52 +0,0 @@
----
-# file: group_vars/aztest.yml
-
-# site configuration
-site_name: aztest
-site_humanname: "Arizona Test Site"
-
-deployment_type: campus
-
-xos_users:
-  - email: padmin@vicci.org
-    password: letmein
-    first: PAdmin
-    last: VicciOrg
-    admin: true
-
-# IP prefix for VMs
-virt_nets:
-  - name: mgmtbr
-    ipv4_prefix: 192.168.250
-    head_vms: true
-
-# DNS/domain settings
-site_suffix: aztest.infra.opencloud.us
-
-dns_search:
-  - aztest.infra.opencloud.us
-  - opencloud.cs.arizona.edu
-
-nsd_zones:
-  - name: aztest.infra.opencloud.us
-    ipv4_first_octets: 192.168.250
-    name_reverse_unbound: "168.192.in-addr.arpa"
-    soa: ns1
-    ns:
-      - { name: ns1 }
-    nodelist: head_vm_list
-    aliases:
-      - { name: "ns1" , dest: "head" }
-      - { name: "ns" , dest: "head" }
-      - { name: "apt-cache" , dest: "head" }
-
-name_on_public_interface: head
-
-# If true, unbound listens on the head node's `ansible_default_ipv4` interface
-unbound_listen_on_default: True
-
-# VTN network configuration
-management_network_cidr: 172.27.0.0/24
-management_network_ip: 172.27.0.1/24
-data_plane_ip: 10.168.0.253/24
-
diff --git a/vars/cord.yml b/vars/cord.yml
deleted file mode 100644
index 50ed1bc..0000000
--- a/vars/cord.yml
+++ /dev/null
@@ -1,70 +0,0 @@
----
-# file: group_vars/cord.yml
-
-# site configuration
-site_name: mysite
-site_humanname: MySite
-
-deployment_type: MyDeployment
-
-xos_users:
-  - email: padmin@vicci.org
-    password: letmein
-    first: PAdmin
-    last: VicciOrg
-    admin: true
-
-# VM networks/bridges on head
-virt_nets:
-  - name: mgmtbr
-    ipv4_prefix: 192.168.122
-    head_vms: true
-
-# site domain suffix
-site_suffix: cord.lab
-
-# SSL server certificate generation
-server_certs:
-  - cn: "keystone.{{ site_suffix }}"
-    subj: "/C=US/ST=California/L=Menlo Park/O=ON.Lab/OU=Test Deployment/CN=keystone.{{ site_suffix }}"
-    altnames:
-      - "DNS:keystone.{{ site_suffix }}"
-      - "DNS:{{ site_suffix }}"
-  - cn: "xos-core.{{ site_suffix }}"
-    subj: "/C=US/ST=California/L=Menlo Park/O=ON.Lab/OU=Test Deployment/CN=xos-core.{{ site_suffix }}"
-    altnames:
-      - "DNS:xos-core.{{ site_suffix }}"
-
-# resolv.conf settings
-dns_search:
-  - "{{ site_suffix }}"
-
-# NSD/Unbound settings
-nsd_zones:
-  - name: "{{ site_suffix }}"
-    ipv4_first_octets: 192.168.122
-    name_reverse_unbound: "168.192.in-addr.arpa"
-    soa: ns1
-    ns:
-      - { name: ns1 }
-    nodelist: head_vm_list
-    aliases:
-      - { name: "ns1" , dest: "head" }
-      - { name: "ns" , dest: "head" }
-      - { name: "apt-cache" , dest: "head" }
-
-name_on_public_interface: head
-
-# If true, unbound listens on the head node's `ansible_default_ipv4` interface
-unbound_listen_on_default: True
-
-# turn this on, or override it when running the playbook with --extra-vars="on_cloudlab=True"
-on_cloudlab: False
-
-# VTN network configuration
-management_network_cidr: 172.27.0.0/24
-management_network_ip: 172.27.0.1/24
-data_plane_ip: 10.168.0.253/24
-
-# CORD ONOS app version
-cord_app_version: 1.2-SNAPSHOT
diff --git a/vars/cord_defaults.yml b/vars/cord_defaults.yml
deleted file mode 100644
index ca21591..0000000
--- a/vars/cord_defaults.yml
+++ /dev/null
@@ -1,242 +0,0 @@
----
-# vars/cord_defaults.yml
-
-# turn this off, or override when running playbook with --extra-vars="on_maas=False"
-on_maas: true
-
-run_dist_upgrade: false
-
-maas_node_key: /etc/maas/ansible/id_rsa
-
-openstack_version: kilo
-
-juju_config_name: cord
-
-juju_config_path: /usr/local/src/juju_config.yml
-
-service_profile_repo_dest: "{{ ansible_user_dir }}/service-profile"
-
-xos_configuration: cord-pod
-
-# Pull ONOS from local Docker registry rather than Docker Hub
-onos_docker_image: "docker-registry:5000/opencord/onos:candidate"
-
-xos_config_targets:
-  - local_containers
-  - xos
-  - vtn
-  - fabric
-  - cord
-  - vrouter
-
-xos_tosca_templates:
-  - cord-services.yaml
-  - cord-test-subscriber.yaml
-  - deployment.yaml
-  - exampleservice.yaml
-  - fabric.yaml
-  - management-net.yaml
-  - nodes.yaml
-  - openstack.yaml
-  - public-net.yaml
-  - vtn.yaml
-
-deployment_flavors:
-  - m1.small
-  - m1.medium
-  - m1.large
-  - m1.xlarge
-
-xos_container_rebuild: False
-
-apt_cacher_name: apt-cache
-
-apt_ssl_sites:
-  - apt.dockerproject.org
-  - butler.opencloud.cs.arizona.edu
-  - deb.nodesource.com
-
-charm_versions:
-  ceilometer: "cs:trusty/ceilometer-17"
-  ceilometer-agent: "cs:trusty/ceilometer-agent-13"
-  glance: "cs:trusty/glance-28"
-  keystone: "cs:trusty/keystone-33"
-  mongodb: "cs:trusty/mongodb-33"
-  percona-cluster: "cs:trusty/percona-cluster-31"
-  nagios: "cs:trusty/nagios-10"
-  neutron-api: "cs:~cordteam/trusty/neutron-api-4"
-  nova-cloud-controller: "cs:trusty/nova-cloud-controller-64"
-  nova-compute: "cs:~cordteam/trusty/nova-compute-2"
-  nrpe: "cs:trusty/nrpe-4"
-  ntp: "cs:trusty/ntp-14"
-  openstack-dashboard: "cs:trusty/openstack-dashboard-19"
-  rabbitmq-server: "cs:trusty/rabbitmq-server-42"
-
-head_vm_list: []
-
-head_lxd_list:
-  - name: "juju-1"
-    service: "juju"
-    aliases:
-      - "juju"
-    ipv4_last_octet: 10
-
-  - name: "ceilometer-1"
-    service: "ceilometer"
-    aliases:
-      - "ceilometer"
-    ipv4_last_octet: 20
-    forwarded_ports:
-      - { ext: 8777, int: 8777 }
-
-  - name: "glance-1"
-    service: "glance"
-    aliases:
-      - "glance"
-    ipv4_last_octet: 30
-    forwarded_ports:
-      - { ext: 9292, int: 9292 }
-
-  - name: "keystone-1"
-    service: "keystone"
-    aliases:
-      - "keystone"
-    ipv4_last_octet: 40
-    forwarded_ports:
-      - { ext: 35357, int: 35357 }
-      - { ext: 4990, int: 4990 }
-      - { ext: 5000, int: 5000 }
-
-  - name: "percona-cluster-1"
-    service: "percona-cluster"
-    aliases:
-      - "percona-cluster"
-    ipv4_last_octet: 50
-
-  - name: "nagios-1"
-    service: "nagios"
-    aliases:
-      - "nagios"
-    ipv4_last_octet: 60
-    forwarded_ports:
-      - { ext: 3128, int: 80 }
-
-  - name: "neutron-api-1"
-    service: "neutron-api"
-    aliases:
-      - "neutron-api"
-    ipv4_last_octet: 70
-    forwarded_ports:
-      - { ext: 9696, int: 9696 }
-
-  - name: "nova-cloud-controller-1"
-    service: "nova-cloud-controller"
-    aliases:
-      - "nova-cloud-controller"
-    ipv4_last_octet: 80
-    forwarded_ports:
-      - { ext: 8774, int: 8774 }
-
-  - name: "openstack-dashboard-1"
-    service: "openstack-dashboard"
-    aliases:
-      - "openstack-dashboard"
-    ipv4_last_octet: 90
-    forwarded_ports:
-      - { ext: 8080, int: 80 }
-
-  - name: "rabbitmq-server-1"
-    service: "rabbitmq-server"
-    aliases:
-      - "rabbitmq-server"
-    ipv4_last_octet: 100
-
-  - name: "mongodb-1"
-    service: "mongodb"
-    aliases:
-      - "mongodb"
-    ipv4_last_octet: 110
-
-lxd_service_list:
-  - ceilometer
-  - glance
-  - keystone
-  - percona-cluster
-  - nagios
-  - neutron-api
-  - nova-cloud-controller
-  - openstack-dashboard
-  - rabbitmq-server
-  - mongodb
-
-standalone_service_list:
-  - ntp
-  - nrpe
-  - ceilometer-agent
-
-
-service_relations:
-  - name: keystone
-    relations: [ "percona-cluster", "nrpe", ]
-
-  - name: nova-cloud-controller
-    relations: [ "percona-cluster", "rabbitmq-server", "glance", "keystone", "nrpe", ]
-
-  - name: glance
-    relations: [ "percona-cluster", "keystone", "nrpe", ]
-
-  - name: neutron-api
-    relations: [ "keystone",  "percona-cluster", "rabbitmq-server", "nova-cloud-controller", "nrpe", ]
-
-  - name: openstack-dashboard
-    relations: [ "keystone", "nrpe", ]
-
-  - name: nagios
-    relations: [ "nrpe", ]
-
-  - name: "percona-cluster:juju-info"
-    relations: [ "nrpe:general-info", ]
-
-  - name: rabbitmq-server
-    relations: [ "nrpe", ]
-
-  - name: ceilometer
-    relations: [ "mongodb", "rabbitmq-server", "nagios", "nrpe", ]
-
-  - name: "ceilometer:identity-service"
-    relations: [ "keystone:identity-service", ]
-
-  - name: "ceilometer:ceilometer-service"
-    relations: [ "ceilometer-agent:ceilometer-service", ]
-
-
-compute_relations:
-  - name: nova-compute
-    relations: [ "ceilometer-agent", "glance", "nova-cloud-controller", "nagios", "nrpe", ]
-
-  - name: "nova-compute:shared-db"
-    relations: [ "percona-cluster:shared-db", ]
-
-  - name: "nova-compute:amqp"
-    relations: [ "rabbitmq-server:amqp", ]
-
-  - name: ntp
-    relations: [ "nova-compute", ]
-
-
-xos_images:
-  - name: "trusty-server-multi-nic"
-    url: "http://www.vicci.org/opencloud/trusty-server-cloudimg-amd64-disk1.img.20170201"
-    checksum: "sha256:ebf007ba3ec1043b7cd011fc6668e2a1d1d4c69c41071e8513ab355df7a057cb"
-
-  - name: "vsg-1.1"
-    url: "http://www.vicci.org/cord/vsg-1.1.img"
-    checksum: "sha256:16b0beb6778aed0f5feecb05f8d5750e6c262f98e6011e99ddadf7d46a177b6f"
-
-  - name: "ceilometer-trusty-server-multi-nic"
-    url: "http://www.vicci.org/cord/ceilometer-trusty-server-multi-nic.compressed.qcow2"
-    checksum: "sha256:b77ef8d692b640568dea13df99fe1dfcb1f4bb4ac05408db9ff77399b34f754f"
-
-  - name: "ceilometer-service-trusty-server-multi-nic"
-    url: "http://www.vicci.org/cord/ceilometer-service-trusty-server-multi-nic.compressed.qcow2.20170131"
-    checksum: "sha256:f0341e283f0f2cb8f70cd1a6347e0081c9c8492ef34eb6397c657ef824800d4f"
diff --git a/vars/example_keystone.yml b/vars/example_keystone.yml
deleted file mode 100644
index 14df06f..0000000
--- a/vars/example_keystone.yml
+++ /dev/null
@@ -1,4 +0,0 @@
----
-
-keystone_admin_password: "VeryLongKeystoneAdminPassword"
-
diff --git a/vars/opencloud_defaults.yml b/vars/opencloud_defaults.yml
deleted file mode 100644
index bc16f70..0000000
--- a/vars/opencloud_defaults.yml
+++ /dev/null
@@ -1,240 +0,0 @@
----
-# vars/opencloud_defaults.yml
-
-on_maas: false
-
-run_dist_upgrade: true
-
-openstack_version: kilo
-
-juju_config_name: opencloud
-
-xos_configuration: opencloud
-
-xos_config_targets:
-  - local_containers
-  - xos
-  - opencloud
-
-xos_tosca_templates:
-  - deployment.yaml
-  - exampleservice.yaml
-  - management-net.yaml
-  - nodes.yaml
-  - openstack.yaml
-  - public-net.yaml
-  - vtn.yaml
-
-deployment_flavors:
-  - m1.small
-  - m1.medium
-  - m1.large
-  - m1.xlarge
-
-apt_cacher_name: apt-cache
-
-apt_ssl_sites:
-  - apt.dockerproject.org
-  - butler.opencloud.cs.arizona.edu
-  - deb.nodesource.com
-
-charm_versions:
-  neutron-api: "cs:~cordteam/trusty/neutron-api-3"
-  nova-compute: "cs:~cordteam/trusty/nova-compute-2"
-
-head_vm_list:
-  - name: "juju-1"
-    service: "juju"
-    aliases:
-       - "juju"
-    ipv4_last_octet: 10
-    cpu: 1
-    memMB: 2048
-    diskGB: 20
-
-  - name: "ceilometer-1"
-    service: "ceilometer"
-    aliases:
-      - "ceilometer"
-    ipv4_last_octet: 20
-    cpu: 1
-    memMB: 2048
-    diskGB: 20
-    forwarded_ports:
-      - { ext: 8777, int: 8777 }
-
-  - name: "glance-1"
-    service: "glance"
-    aliases:
-      - "glance"
-    ipv4_last_octet: 30
-    cpu: 2
-    memMB: 4096
-    diskGB: 160
-    forwarded_ports:
-      - { ext: 9292, int: 9292 }
-
-  - name: "keystone-1"
-    service: "keystone"
-    aliases:
-      - "keystone"
-    ipv4_last_octet: 40
-    cpu: 2
-    memMB: 4096
-    diskGB: 40
-    forwarded_ports:
-      - { ext: 35357, int: 35357 }
-      - { ext: 4990, int: 4990 }
-      - { ext: 5000, int: 5000 }
-
-  - name: "percona-cluster-1"
-    service: "percona-cluster"
-    aliases:
-      - "percona-cluster"
-    ipv4_last_octet: 50
-    cpu: 2
-    memMB: 4096
-    diskGB: 40
-
-  - name: "nagios-1"
-    service: "nagios"
-    aliases:
-      - "nagios"
-    ipv4_last_octet: 60
-    cpu: 1
-    memMB: 2048
-    diskGB: 20
-    forwarded_ports:
-      - { ext: 3128, int: 80 }
-
-  - name: "neutron-api-1"
-    service: "neutron-api"
-    aliases:
-      - "neutron-api"
-    ipv4_last_octet: 70
-    cpu: 2
-    memMB: 4096
-    diskGB: 40
-    forwarded_ports:
-      - { ext: 9696, int: 9696 }
-
-
-  - name: "nova-cloud-controller-1"
-    service: "nova-cloud-controller"
-    aliases:
-      - "nova-cloud-controller"
-    ipv4_last_octet: 90
-    cpu: 2
-    memMB: 4096
-    diskGB: 40
-    forwarded_ports:
-      - { ext: 8774, int: 8774 }
-
-  - name: "openstack-dashboard-1"
-    service: "openstack-dashboard"
-    aliases:
-      - "openstack-dashboard"
-    ipv4_last_octet: 100
-    cpu: 1
-    memMB: 2048
-    diskGB: 20
-    forwarded_ports:
-      - { ext: 8080, int: 80 }
-
-  - name: "rabbitmq-server-1"
-    service: "rabbitmq-server"
-    aliases:
-      - "rabbitmq-server"
-    ipv4_last_octet: 110
-    cpu: 2
-    memMB: 4096
-    diskGB: 40
-
-  - name: "onos-cord-1"
-    aliases:
-      - "onos-cord"
-    ipv4_last_octet: 110
-    cpu: 2
-    memMB: 4096
-    diskGB: 40
-    docker_path: "cord"
-
-  - name: "xos-1"
-    aliases:
-      - "xos"
-    ipv4_last_octet: 130
-    cpu: 2
-    memMB: 4096
-    diskGB: 40
-    docker_path: 'service-profile/opencloud'
-
-vm_service_list:
-  - ceilometer
-  - glance
-  - keystone
-  - nagios
-  - neutron-api
-  - nova-cloud-controller
-  - openstack-dashboard
-  - percona-cluster
-  - rabbitmq-server
-
-standalone_service_list:
-  - ceilometer-agent
-  - nrpe
-  - ntp
-
-service_relations:
-  - name: keystone
-    relations: [ "percona-cluster", "nrpe", ]
-
-  - name: nova-cloud-controller
-    relations: [ "percona-cluster", "rabbitmq-server", "glance", "keystone", "nrpe", ]
-
-  - name: glance
-    relations: [ "percona-cluster", "keystone", "nrpe", ]
-
-  - name: neutron-api
-    relations: [ "keystone", "percona-cluster", "rabbitmq-server", "nova-cloud-controller", "nrpe", ]
-
-  - name: openstack-dashboard
-    relations: [ "keystone", "nrpe", ]
-
-  - name: nagios
-    relations: [ "nrpe", ]
-
-  - name: "percona-cluster:juju-info"
-    relations: [ "nrpe:general-info", ]
-
-  - name: rabbitmq-server
-    relations: [ "nrpe", ]
-
-  - name: ceilometer
-    relations: [ "mongodb", "rabbitmq-server", "nagios", "nrpe", ]
-
-  - name: "ceilometer:identity-service"
-    relations: [ "keystone:identity-service", ]
-
-  - name: "ceilometer:ceilometer-service"
-    relations: [ "ceilometer-agent:ceilometer-service", ]
-
-
-compute_relations:
-  - name: nova-compute
-    relations: [ "ceilometer-agent", "glance", "nova-cloud-controller", "nagios", "nrpe", ]
-
-  - name: "nova-compute:shared-db"
-    relations: [ "percona-cluster:shared-db", ]
-
-  - name: "nova-compute:amqp"
-    relations: [ "rabbitmq-server:amqp", ]
-
-  - name: ntp
-    relations: [ "nova-compute", ]
-
-
-xos_images:
-  - name: "trusty-server-multi-nic"
-    url: "http://www.vicci.org/opencloud/trusty-server-cloudimg-amd64-disk1.img"
-    checksum: "sha256:c2d0ffc937aeb96016164881052a496658efeb98959dc68e73d9895c5d9920f7"
-
diff --git a/xos-reinstall-playbook.yml b/xos-reinstall-playbook.yml
deleted file mode 100644
index b204d69..0000000
--- a/xos-reinstall-playbook.yml
+++ /dev/null
@@ -1,28 +0,0 @@
----
-# Runs "make cleanup", deletes XOS,  reinstalls, and restarts XOS
-
-- name: Include vars
-  hosts: head
-  tasks:
-    - name: Include variables
-      include_vars: "{{ item }}"
-      with_items:
-        - vars/cord_defaults.yml
-        - vars/cord.yml
-        - vars/example_keystone.yml
-
-- name: Reinstall XOS
-  hosts: head
-  roles:
-    - xos-uninstall
-    - xos-install
-    - xos-config
-    - xos-head-start
-
-- name: Reprovision compute nodes
-  hosts: head
-  tasks:
-   - name: Delete maas inventory
-     command: cord prov delete -a
-     tags:
-       - skip_ansible_lint