updated documentation

Change-Id: I386b1fcc3b9cf79966a8c96a66c8ea08d881375e
(cherry picked from commit fdf4adb8bd265d810d8d1f30eacb93d9d65438cb)
diff --git a/API.md b/API.md
new file mode 100644
index 0000000..0f4f2ec
--- /dev/null
+++ b/API.md
@@ -0,0 +1,360 @@
+# API Documentation
+A number of services provide automation for CORD. These services are written as
+Docker-based micro (µ) services. Each runs independently in its own container,
+consumes the other µ services via REST API calls, and provides a REST API
+through which other services can utilize its functions.
+
+The current µ services are:
+   * **automation** automates bare metal compute hosts through the MAAS
+    deployment life cycle and invokes CORD specific provisioning
+   * **provisioner** applies CORD specific provisioning to compute nodes and
+    fabric switches
+   * **switchq** identifies supported fabric switches and invokes CORD specific
+    provisioning
+   * **allocator** allocates IP addresses from a specified range to be used
+    for interfaces attached to the leaf - spine fabric
+   * **harvester** augments MAAS's DDNS capability to support all devices that
+    request a DHCP address
+   * **config-generator** generates a configuration file for the ONOS leaf -
+    spine (segment routing) fabric
+
+![CORD automation µ services](doc/images/uservices.png)
+
+## Automation
+**Docker image:** cord-maas-automation
+
+### Configuration
+|Environment Variable|Default|Description|
+|-|-|-|
+|AUTOMATION_POWER_HELPER_USER|"cord"|User ID to use when attempting to execute vboxmanage on the host machine|
+|AUTOMATION_POWER_HELPER_HOST|"127.0.0.1"|IP address of the host on which to execute vboxmanage commands|
+|AUTOMATION_POWER_HELPER_SCRIPT|""|Script to execute to help manage power for VirtualBox nodes in MAAS|
+|AUTOMATION_PROVISION_URL|""|URL on which to contact the provision services|
+|AUTOMATION_PROVISION_TTL|"1h"|Amount of time to wait for a provisioning to complete before considering it failed|
+|AUTOMATION_LOG_LEVEL|"warning"|Level of logging messages to display|
+|AUTOMATION_LOG_FORMAT|"text"|Format of the log messages|
+
+|Command Line Flag|Default|Description|
+|-|-|-
+|-apikey|""|key with which to access MAAS server|
+|-maas|"http://localhost/MAAS"|url over which to access MAAS|
+|-apiVersion|"1.0"|version of the API to access|
+|-queryPeriod|"15s"|frequency at which the MAAS service is polled for node states|
+|-preview|false|displays the action that would be taken without performing it; in this mode the nodes are processed only once|
+|-mappings|"{}"|the MAC to name mappings|
+|-always-rename|true|attempt to rename at every stage of workflow|
+|-filter|'{"hosts":{"include":[".*"],"exclude":[]},"zones":{"include": ["default"],"exclude":[]}}'|constrain by hostname what will be automated|
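+
+As a rough sketch of how these settings might be supplied, the hypothetical
+`docker run` invocation below sets a few of the environment variables and
+command line flags described above. The provisioner URL, the API key
+placeholder, and the assumption that flags can be passed through to the
+container entry point are illustrative only; an actual deployment is driven by
+the compose files under `/etc/maas`.
+```
+# Illustrative only -- values and flag passing are assumptions
+docker run -d --name automation \
+  -e AUTOMATION_PROVISION_URL=http://provisioner:4243/provision/ \
+  -e AUTOMATION_PROVISION_TTL=1h \
+  -e AUTOMATION_LOG_LEVEL=warning \
+  cord-maas-automation \
+  -maas http://localhost/MAAS -apikey <your-maas-api-key> -queryPeriod 15s
+```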
+
+### REST Resources
+None
+
+## Provisioner
+**Docker image:** cord-provisioner
+
+### Configuration
+|Environment Variable|Default|Description|
+|-|-|-|
+|PROVISION_PORT|"4243"|Port on which to listen for REST requests|
+|PROVISION_LISTEN|"0.0.0.0"|IP address on which to listen for REST requests|
+|PROVISION_ROLE_SELECTOR_URL|""|URL of a service that can be queried to determine the role that should be used for a given node, else the default is used|
+|PROVISION_DEFAULT_ROLE|"compute-node"|the default role to be used if no selection URL is specified|
+|PROVISION_SCRIPT|"do-ansible"|script to execute for a provisioning event|
+|PROVISION_STORAGE_URL|"memory:"|URL to use for storage of provisioning state information|
+|PROVISION_LOG_LEVEL|"warning"|Level of logging messages to display|
+|PROVISION_LOG_FORMAT|"text"|Format of the log messages|
+
+### REST Resources
+|URI|Operation|Description|
+|-|-|-|
+|/provision/|POST|create a new provisioning request|
+|/provision/|GET|get a list of all provisioning requests and their state|
+|/provision/{id}|GET|get a single provisioning request and state|
+|/provision/{id}|DELETE|delete a provisioning request|
+
+##### POST /provision/
+A `POST` to this URL initiates a new provisioning request. The request requires
+that a provisioning request object be sent as the request data.
+
+The request returns a `201 Accepted` response if the request was successfully
+queued for provisioning.
+
+The request body is a `JSON` object with the following members:
+
+|Name|Type|Required|Description|
+|-|-|-|-|
+|id|string|yes|unique ID to use for the request|
+|name|string|yes|human readable name for the node being provisioned|
+|ip|string|yes|IP address of the node being provisioned|
+|mac|string|no|MAC address associated with the node being provisioned|
+|role_selector|string|no|URL for a per request role selector service|
+|role|string|no|role to provision for this request, if no selector specified|
+|script|string|no|script to execute for this provisioning request|
+
+Example:
+```
+{
+    "id": "node-fe30a9c4-4a30-11e6-b7a3-002590fa5f58"
+    "name": "lively-road.cord.lab",
+    "ip": "10.2.0.16",
+    "mac": "00:25:90:fa:5f:4f",
+    "role_selector": "",
+    "role": "",
+    "script": "",
+}
+```
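+
+As an illustration, such a request could be submitted with `curl` as shown
+below. The IP address is an example; substitute the address of your
+provisioner container (e.g., as obtained via `docker inspect`).
+```
+curl -sS -XPOST -H 'Content-Type: application/json' \
+  http://172.19.0.3:4243/provision/ \
+  -d '{"id": "node-fe30a9c4-4a30-11e6-b7a3-002590fa5f58",
+       "name": "lively-road.cord.lab",
+       "ip": "10.2.0.16",
+       "mac": "00:25:90:fa:5f:4f"}'
+```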
+##### GET /provision/
+Fetches the list of all provisioning requests and their state information. The
+result is a JSON array such as the example below:
+
+|Name|Type|Description|
+|-|-|-|
+|timestamp|number|time that the request was made|
+|message|string|error message if the request failed|
+|status|number|the status of the request, 0=pending,1=provisioning,2=complete,3=failed|
+|worker|number|internal identifier of the worker that executed the provisioning request|
+|request.Role|string|actual role used for the request|
+|request.Script|string|actual script used for the request|
+|request.Info|object|the original request made to the provisioner|
+
+```
+[
+  {
+    "timestamp": 1469550527,
+    "message": "",
+    "status": 2,
+    "worker": 2,
+    "request": {
+      "Role": "compute-node",
+      "Script": "/etc/maas/ansible/do-ansible",
+      "Info": {
+        "script": "",
+        "role": "",
+        "role_selector": "",
+        "mac": "00:25:90:fa:5f:53",
+        "ip": "10.2.0.15",
+        "name": "bitter-prison.cord.lab",
+        "id": "node-fe205272-4a30-11e6-a48d-002590fa5f58"
+      }
+    }
+  },
+  {
+    "timestamp": 1468544505,
+    "message": "",
+    "status": 2,
+    "worker": 0,
+    "request": {
+      "Role": "fabric-switch",
+      "Script": "/etc/maas/ansible/do-switch",
+      "Info": {
+        "script": "/etc/maas/ansible/do-switch",
+        "role": "fabric-switch",
+        "role_selector": "",
+        "mac": "cc:37:ab:7c:ba:da",
+        "ip": "10.2.0.5",
+        "name": "leaf-2",
+        "id": "cc:37:ab:7c:ba:da"
+      }
+    }
+  }
+]
+```
+
+##### GET /provision/{id}
+Fetches the provisioning request and state for a single specified ID. The
+result is a single JSON object as described below:
+
+|Name|Type|Description|
+|-|-|-|
+|timestamp|number|time that the request was made|
+|message|string|error message if the request failed|
+|status|number|the status of the request, 0=pending,1=provisioning,2=complete,3=failed|
+|worker|number|internal identifier of the worker that executed the provisioning request|
+|request.Role|string|actual role used for the request|
+|request.Script|string|actual script used for the request|
+|request.Info|object|the original request made to the provisioner|
+
+```
+{
+  "timestamp": 1469550527,
+  "message": "",
+  "status": 2,
+  "worker": 2,
+  "request": {
+    "Role": "compute-node",
+    "Script": "/etc/maas/ansible/do-ansible",
+    "Info": {
+      "script": "",
+      "role": "",
+      "role_selector": "",
+      "mac": "00:25:90:fa:5f:53",
+      "ip": "10.2.0.15",
+      "name": "bitter-prison.cord.lab",
+      "id": "node-fe205272-4a30-11e6-a48d-002590fa5f58"
+    }
+  }
+}
+```
+
+##### DELETE /provision/{id}
+Removes a request from the provisioner. If the request is in flight, it will be
+completed before it is removed.
+
+## Switchq
+**Docker image:** cord-maas-switchq
+
+### Configuration
+|Environment Variable|Default|Description|
+|-|-|-|
+|SWITCHQ_VENDORS_URL|"file:///switchq/vendors.json"|URL from which a structure can be read that identifies the supported vendor OUIs|
+|SWITCHQ_STORAGE_URL|"memory:"|URL that specifies where the service should maintain its state|
+|SWITCHQ_ADDRESS_URL|"file:///switchq/dhcp_harvest.inc"|URL from which the service should obtain device IP / MAC information for known devices|
+|SWITCHQ_POLL_INTERVAL|"1m"|Interval at which a check should be made for new devices|
+|SWITCHQ_PROVISION_TTL|"1h"|how often the switches will be re-provisioned|
+|SWITCHQ_PROVISION_URL|""|the URL on which to contact the provisioner to make provisioning requests|
+|SWITCHQ_ROLE_SELECTOR_URL|""|URL of a service that can be queried to determine the role that should be used for a given node, else the default is used|
+|SWITCHQ_DEFAULT_ROLE|"fabric-switch"|the default role to be used if no selection URL is specified|
+|SWITCHQ_SCRIPT|"do-ansible"|script to execute for a provisioning event|
+|SWITCHQ_LOG_LEVEL|"warning"|Level of logging messages to display|
+|SWITCHQ_LOG_FORMAT|"text"|Format of the log messages|
+
+### REST Resources
+None
+
+## Allocator
+**Docker image:** cord-ip-allocator
+
+### Configuration
+|Environment Variable|Default|Description|
+|-|-|-|
+|ALLOCATE_PORT|"4242"|port on which to listen for requests|
+|ALLOCATE_LISTEN|"0.0.0.0"|IP address on which to listen for requests|
+|ALLOCATE_NETWORK|"10.0.0.0/24"|Subnet from which addresses should be allocated|
+|ALLOCATE_SKIP|"1"|number of host addresses to skip in the subnet before the allocation range|
+|ALLOCATE_LOG_LEVEL|"warning"|Level of logging messages to display|
+|ALLOCATE_LOG_FORMAT|"text"|Format of the log messages|
+
+### REST Resources
+|URI|Operation|Description|
+|-|-|-|
+|/allocations/{mac}|DELETE|delete allocation for a specific MAC|
+|/allocations/{mac}|GET|return the allocation for a specific MAC|
+|/allocations/|GET|return the list of all allocations|
+|/addresses/{ip}|DELETE|delete the allocation associated with a specific IP|
+
+##### DELETE /allocations/{mac}
+If the specified MAC address is associated with an IP address, the allocation /
+association is deleted.
+
+##### GET /allocations/{mac}
+Returns the IP address associated with the specified MAC. If no association
+exists, an IP address is allocated from the range, associated with the MAC,
+and returned.
+
+|Name|Type|Description|
+|-|-|-|
+|Ip|string|IP address associated with the specified MAC|
+|Mac|string|MAC address|
+
+Example:
+```
+{
+  "Ip": "10.6.1.4",
+  "Mac": "00:25:90:fa:5f:79"
+}
+```
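+
+For example, the allocation for a given MAC could be requested as follows. The
+allocator address is illustrative; the port corresponds to the default
+`ALLOCATE_PORT`.
+```
+curl -sS http://172.19.0.4:4242/allocations/00:25:90:fa:5f:79 | jq .
+```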
+
+##### GET /allocations/
+Returns a list of all known MAC to IP associations.
+
+|Name|Type|Description|
+|-|-|-|
+|Ip|string|IP address associated with the MAC|
+|Mac|string|MAC address|
+
+Example:
+```
+[
+  {
+    "Ip": "10.6.1.4",
+    "Mac": "00:25:90:fa:5f:79"
+  },
+  {
+    "Ip": "10.6.1.2",
+    "Mac": "00:25:90:fa:5f:53"
+  },
+  {
+    "Ip": "10.6.1.3",
+    "Mac": "00:25:90:fa:5f:4f"
+  }
+]
+```
+
+##### DELETE /addresses/{ip}
+If the specified IP is associated with a MAC address, the association is
+deleted.
+
+## Harvester
+**Docker image:** cord-dhcp-harvester
+
+### Configuration
+|Command Line Flag|Default|Description|
+|-|-|-|
+|'-l', '--leases'|'/dhcp/dhcpd.leases'|specifies the DHCP lease file from which to harvest|
+|'-x', '--reservations'|'/etc/dhcp/dhcpd.reservations'|specifies the reservation file, as ISC DHCP doesn't update the lease file for fixed addresses|
+|'-d', '--dest'|'/bind/dhcp_harvest.inc'|specifies the file to write the additional DNS information|
+|'-i', '--include'|None|list of hostnames to include when harvesting DNS information|
+|'-f', '--filter'|None|list of regex expressions to use as an include filter|
+|'-r', '--repeat'|None|continuously harvest DHCP information at the specified interval|
+|'-c', '--command'|'rndc'|shell command to execute to cause reload|
+|'-k', '--key'|None|rndc key file to use to access DNS server|
+|'-s', '--server'|'127.0.0.1'|server to reload after generating updated DNS information|
+|'-p', '--port'|'954'|port on the server to contact to reload the server|
+|'-z', '--zone'|None|zone to reload after generating updated DNS information|
+|'-u', '--update'|False|update the DNS server by reloading the zone|
+|'-y', '--verify'|False|verify the hosts with a ping before pushing them to DNS|
+|'-t', '--timeout'|'1s'|specifies the duration to wait for a verification ping from a host|
+|'-a', '--apiserver'|'0.0.0.0'|specifies the interfaces on which to listen for API requests|
+|'-e', '--apiport'|'8954'|specifies the port on which to listen for API requests|
+|'-q', '--quiet'|'1m'|specifies a minimum quiet period between harvests|
+|'-w', '--workers'|5|specifies the number of workers to use when verifying IP addresses|
+
+### REST Resources
+
+|URI|Operation|Description|
+|-|-|-|
+|/harvest|POST|Forces the service to perform an IP harvest against the DHCP server and update DNS|
+
+##### POST /harvest
+The service periodically harvests IP information from the specified DHCP
+server and updates DNS zones accordingly. This request forces a harvest to
+be performed immediately.
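+
+For example, a harvest can be forced as follows. The harvester address is
+illustrative; the port corresponds to the default `--apiport` setting.
+```
+curl -sS -XPOST http://172.19.0.5:8954/harvest
+```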
+
+## config-generator
+**Docker image:** cord-config-generator
+
+### Configuration
+|Environment Variable|Default|Description|
+|-|-|-|
+|CONFIGGEN_PORT|"8181"|port on which to contact ONOS|
+|CONFIGGEN_IP|"127.0.0.1"|IP on which to contact ONOS|
+|CONFIGGEN_SWITCHCOUNT|"0"|number of switches expected to be found in ONOS when generating a configuration|
+|CONFIGGEN_HOSTCOUNT|"0"|number of hosts expected to be found in ONOS when generating a configuration|
+|CONFIGGEN_USERNAME|"karaf"|username to use when invoking requests to ONOS|
+|CONFIGGEN_PASSWORD|"karaf"|password to use when invoking requests to ONOS|
+|CONFIGGEN_LOGLEVEL|"warning"|Level of logging messages to display|
+|CONFIGGEN_LOGFORMAT|"text"|Format of the log messages|
+|CONFIGGEN_CONFIGSERVERPORT|"1337"|port on which to listen for configuration generation requests|
+|CONFIGGEN_CONFIGSERVERIP|"127.0.0.1"|IP on which to listen for configuration generation requests|
+
+### REST Resources
+|URI|Operation|Description|
+|-|-|-|
+|/config/|POST|Generates and returns a CORD leaf - spine configuration file|
+
+##### POST /config/
+This request interrogates the specified ONOS instance and, if the expected
+number of switches and hosts are present in ONOS, generates a configuration
+file suitable for use with the CORD leaf - spine fabric and returns it to the
+caller.
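+
+For example, a configuration could be requested as follows. The address is
+illustrative; the port corresponds to the default `CONFIGGEN_CONFIGSERVERPORT`.
+```
+curl -sS -XPOST http://172.19.0.6:1337/config/ | jq .
+```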
diff --git a/HINTS.md b/HINTS.md
new file mode 100644
index 0000000..5586a52
--- /dev/null
+++ b/HINTS.md
@@ -0,0 +1,131 @@
+# HINTS
+This document contains hints and troubleshooting tips that might be helpful
+when deploying a CORD POD. These tips are specific to the automation of the
+deployment, which is the focus of this repository.
+
+## Micro services
+Automation of the CORD POD is driven by a set of micro services run in Docker
+containers. Information about the configuration and REST API for those micro
+services can be found in the [API Document](API.md).
+
+## Useful Script
+While it is possible to get the IP address of a container using the `docker
+inspect` command, this requires a lot of typing. The following script can
+be used to quickly determine the IP address of a container.
+
+Save the script to a file such as `/usr/local/bin/docker-ip` and then you can
+get the IP address of a container using `docker-ip <name>` or embed it
+in other commands, such as curl, by doing
+`curl -sS http://$(docker-ip <name>):4243/provision/`
+
+```
+#!/bin/bash
+
+# Require exactly one argument: the container name or ID
+test $# -ne 1 && echo "must specify the name of a container" && exit 1
+
+# Try the default bridge network first
+IP=$(docker inspect --format '{{.NetworkSettings.IPAddress}}' "$1")
+
+# Fall back to the maas_default compose network if no address was found
+if [ -z "$IP" ]; then
+  IP=$(docker inspect --format '{{.NetworkSettings.Networks.maas_default.IPAddress}}' "$1")
+fi
+
+# Emit the address without a trailing newline
+/bin/echo -ne "$IP"
+```
+
+## Viewing Provisioning Logs
+The logs for the provisioning of a compute node or switch can be found on the
+head node in `/etc/maas/ansible/logs`. The files are named with the provision
+request's ID and the suffix `.log`.
+
+These files can be useful when attempting to understand why provisioning may
+be failing.
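+
+For example, to list the log files and follow one of them (the request ID in
+the file name is hypothetical):
+```
+ls -lt /etc/maas/ansible/logs
+tail -f /etc/maas/ansible/logs/node-fe205272-4a30-11e6-a48d-002590fa5f58.log
+```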
+
+## Debugging Provisioning
+The scripts used for provisioning and the Ansible roles can be found in
+`/etc/maas/ansible`. When debugging or understanding provisioning issues
+it can be useful to edit these files so that further debug information will
+be included in the provisioning log files.
+
+## Force a Re-Provisioning of Switch or Compute Node
+The provisioning state of nodes is managed via `Consul`, which is a distributed
+key value store. Storage is backed by the host file system and thus the
+state is persisted across restarts.
+
+Provisioning is configured by default to be performed only once per device. This
+means that after the initial provisioning is complete devices will not be
+re-provisioned unless a compute node is re-deployed or a switch is forced to
+provision.
+
+The easiest way to force a re-provision is to delete the provisioning record
+from the provisioner micro service. On the next cycle of automation either
+the `automation` or `switchq` micro service will notice that no provisioning
+record exists for the node and will re-invoke the provisioning.
+
+To delete a provisioning record, first locate the ID of the record you wish
+to delete. This can be done by querying all provisioning records and finding
+the ID in the result.
+
+1. Use `docker inspect` to discover the IP address of the provisioner
+```
+docker inspect --format '{{.NetworkSettings.Networks.maas_default.IPAddress}}'  provisioner
+```
+
+2. Query the list of provisioning records
+```
+curl -sS http://172.19.0.3:4243/provision/ | jq .
+```
+
+3. Delete the request
+```
+curl -sS -XDELETE http://172.19.0.3:4243/provision/{id}
+```
+
+## Force DNS Information Harvesting
+Periodically IP to host name mapping information is harvested from the DHCP
+server and the DNS server is updated. It can be useful at times, particularly
+during the VM creation in the `deployPlatform` phase of CORD, to force this
+collection. To do so you can leverage the API of the IP harvester.
+
+```
+curl -sS -XPOST $(docker inspect --format '{{.NetworkSettings.Networks.maas_default.IPAddress}}'  harvester):8954/harvest
+```
+
+This call will update the file `/etc/bind/maas/dhcp_harvest.inc` and have
+`bind` reload its configuration files.
+
+## Restart Automation
+To restart all of the automation containers you can use `docker-compose`. The
+following commands will kill all the containers, pull any updated images from
+the docker repository, and then restart the containers with the new images.
+```
+docker-compose -f /etc/maas/automation-compose.yml kill
+docker-compose -f /etc/maas/automation-compose.yml rm --all -f
+docker-compose -f /etc/maas/automation-compose.yml pull
+docker-compose -f /etc/maas/automation-compose.yml up -d
+```
+
+You can operate on a specific container only by specifying the container
+name at the end of the command, such as the following.
+```
+docker-compose -f /etc/maas/automation-compose.yml kill provisioner
+```
+## Restart Harvester
+The harvester micro service is controlled separately from the rest of the
+automation micro services. It is managed via the docker compose file
+`/etc/maas/harvester-compose.yml`.
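+
+The same pattern shown above for the automation containers can be applied to
+the harvester, for example:
+```
+docker-compose -f /etc/maas/harvester-compose.yml kill
+docker-compose -f /etc/maas/harvester-compose.yml rm --all -f
+docker-compose -f /etc/maas/harvester-compose.yml pull
+docker-compose -f /etc/maas/harvester-compose.yml up -d
+```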
+
+## Micro Service Logs
+To view the logs for a given container you can use docker directly
+against the container name / ID, e.g.,
+```
+docker logs provisioner
+```
+use `docker-compose` against the entire collection of containers, e.g.,
+```
+docker-compose -f /etc/maas/automation-compose.yml logs
+```
+or against a specific container in the collection, e.g.,
+```
+docker-compose -f /etc/maas/automation-compose.yml logs provisioner
+```
+
+This can be useful to look for errors in the logs.
diff --git a/QUICKSTART.md b/QUICKSTART.md
deleted file mode 100644
index 4d1eefc..0000000
--- a/QUICKSTART.md
+++ /dev/null
@@ -1,238 +0,0 @@
-# Quick Start
-This guide is meant to enable the user to quickly exercise the capabilities provided by the artifacts of this
-repository. There are three high level tasks that can be exercised:
-   - Create development environment
-   - Build / Tag / Publish Docker images that support bare metal provisioning
-   - Deploy the bare metal provisioning capabilities to a virtual machine (head node) and PXE boot a compute node
-
-**Prerequisite: Vagrant is installed and operationally.**
-_Note: This quick start guide has only been tested against Vagrant + VirtualBox, specially on MacOS._
-
-## Create Development Environment
-The development environment is required for the other tasks in this repository. The other tasks could technically
-be done outside this Vagrant based development environment, but it would be left to the user to ensure
-connectivity and required tools are installed. It is far easier to leverage the Vagrant based environment.
-
-### Create Development Machine
-To create the development machine the following single Vagrant command can be used. This will create an Ubuntu
-14.04 LTS based virtual machine and install some basic required packages, such as Docker, Docker Compose, and
-Oracle Java 8.
-```
-vagrant up maasdev
-```
-
-### Connect to the Development Machine
-To connect to the development machine the following vagrant command can be used.
-```
-vagrant ssh maasdev -- -L 8888:10.100.198.202:80
-```
-
-__Enter the complete command specified above, including the options `-- -L 8888:10.100.198.202:80`. These are used
-for port forwarding in order to make the MAAS UI visible from you local host and will be explained further in the
-section on Verifying MAAS.__
-
-### Complete
-Once you have created and connected to the development environment this task is complete. The `maas` repository
-files can be found on the development machine under `/maasdev`. This directory is mounted from the host machine
-so changes made to files in this directory will be reflected on the host machine and vice-versa.
-
-## Build / Tag / Publish Docker Images
-Bare metal provisioning leverages three (3) utilities built and packaged as Docker container images. These 
-utilities are:
-
-   - cord-maas-bootstrap - (directory: bootstrap) run at MAAS installation time to customize the MAAS instance
-     via REST interfaces
-   - cord-maas-automation - (directory: automation) run on the head node to automate PXE booted servers
-     through the MAAS bare metal deployment work flow
-   - cord-dhcp-harvester - (directory: harvester) run on the head node to facilitate CORD / DHCP / DNS
-     integration so that all hosts can be resolved via DNS
-
-### Build
-
-Each of the Docker images can be built using a command of the form `./gradlew build<Util>Image`, where `<Util>`
-can be `Bootstrap`, `Automation`, or `Harvester`. Building is the process of creating a local Docker image
-for each utility.
-
-_NOTE: The first time you run `./gradlew` it will download from the Internet the `gradle` binary and install it
-locally. This is a one time operation._
-
-```
-./gradlew buildBootstrapImage
-./gradlew buildAutomationImage
-./gradlew buildHarvester
-```
-
-Additionally, you can build all the images by issuing the following command:
-
-```
-./gradlew buildImages
-```
-
-### Tag
-
-Each of the Docker images can be tagged using a command of the form `./gradlew tag<Util>Image`, where `<Util>`
-can be `Bootstrap`, `Automation`, or `Harvester`. Tagging is the process of applying a local name and version to 
-the utility Docker images.
-
-_NOTE: The first time you run `./gradlew` it will download from the Internet the `gradle` binary and install it
-locally. This is a one time operation._
-
-```
-./gradlew tagBootstrapImage
-./gradlew tagAutomationImage
-./gradlew tagHarvester
-```
-
-Additionally, you can tag all the images by issuing the following command:
-
-```
-./gradlew tagImages
-```
-
-### Publish
-
-Each of the Docker images can be published using a command of the form `./gradlew publish<Util>Image`, where
-`<Util>` can be `Bootstrap`, `Automation`, or `Harvester`. Publishing is the process of uploading the locally
-named and tagged Docker image to a local Docker image registry.
-
-_NOTE: The first time you run `./gradlew` it will download from the Internet the `gradle` binary and install it
-locally. This is a one time operation._
-
-```
-./gradlew publishBootstrapImage
-./gradlew publishAutomationImage
-./gradlew publishHarvester
-```
-
-Additionally, you can publish all the images by issuing the following command:
-
-```
-./gradlew publishImages
-```
-
-### Complete
-Once you have built, tagged, and published the utility Docker images this task is complete.
-
-## Deploy Bare Metal Provisioning Capabilities
-There are three parts to deploying bare metal: deploying the head node PXE server (`MAAS`), PXE
-booting a compute node, and post deployment provisioning of the compute node. These tasks are accomplished
-utilizing additionally Vagrant machines as well as executing `gradle` tasks in the Vagrant development machine.
-
-### Create and Deploy MAAS into Head Node
-The first task is to create the Vagrant base head node. This will create an additional Ubutu virtual
-machine. **This task is executed on your host machine and not in the development virtual machine.** To create
-the head node Vagrant machine issue the following command:
-
-```
-vagrant up headnode
-```
-
-### Deploy MAAS
-Canonical MAAS provides the PXE and other bare metal provisioning services for CORD and will be deployed on the
-head node via `Ansible`. To initiate this deployment issue the following `gradle` command. This `gradle` command
-executes `ansible-playbook -i 10.100.198.202, --skip-tags=switch_support,interface_config --extra-vars=external_iface=eth0`.
-
-The IP address, `10.100.198.202` is the IP address assigned to the head node on a private network. The
-`skip-tags` option excludes Ansible tasks not required when utilizing the Vagrant based head node. The
-`extra-vars` option overrides the default role vars for the external interface and is needed for the VirtualBox
-based environment. Traffic from the compute nodes will be NAT-ed through this interface on the head node.
-
-The default MAAS deployment does not support power management for virtual box based hosts. As part of the MAAS
-installation support was added for power management, but it does require some additional configuration. This
-additional configuration is detailed below, but is mentioned here because when deploying the head node an
-additional parameter must be set. This parameter specified the username on the host machine that should be
-used when SSHing from the head node to the host machine to remotely execute the `vboxmanage` command. This
-is typically the username used when logging into your laptop or desktop development machine. This should
-be specified on the deploy command line using the `-PvboxUser` option.
-
-```
-./gradlew -PvboxUser=<username> deploy
-```
-
-This task can take some time so be patient. It should complete without errors, so if an error is encountered
-something when horrible wrong (tm). 
-
-### Verifying MAAS
-
-After the Ansible script is complete the MAAS install can be validated by viewing the MAAS UI. When we
-connected to the `maasdev` Vagrant machine the flags `-- -L 8888:10.100.198.202:80` were added to the end of
-the `vagrant ssh` command. These flags inform Vagrant to expose port `80` on machine `10.100.198.202`
-as port `8888` on your local machine. Essentially, expose the MAAS UI on port `8888` on your local machine.
-To view the MAAS UI simply browser to `http://localhost:8888/MAAS`. 
-
-You can login to MAAS using the user name `cord` and the password `cord`.
-
-Browse around the UI and get familiar with MAAS via documentation at `http://maas.io`
-
-** WAIT **
-
-Before moving on MAAS need to download boot images for Ubuntu. These are the files required to PXE boot
-additional servers. This can take several minutes. To view that status of this operation you can visit
-the URL `http://localhost:8888/MAAS/images/`. When the downloading of images is complete it is possible to 
-go to the next step.
-
-### What Just Happened?
-
-The proposed configuration for a CORD POD is has the following network configuration on the head node:
-
-   - eth0 / eth1 - 40G interfaces, not relevant for the test environment.
-   - eth2 - the interface on which the head node supports PXE boots and is an internally interface to which all
-            the compute nodes connected
-   - eth3 - WAN link. the head node will NAT from eth2 to eth3
-   - mgmtbr - Not associated with a physical network and used to connect in the VM created by the openstack
-              install that is part of XOS
-
-The Ansible scripts configure MAAS to support DHCP/DNS/PXE on the eth2 and mgmtbr interfaces.
-
-### Create and Boot Compute Node
-To create a compute node you use the following vagrant command. This command will create a VM that PXE boots
-to the interface on which the MAAS server is listening. **This task is executed on your host machine and not
-in the development virtual machine.**
-```
-vagrant up computenode
-```
-
-Vagrant will create a UI, which will popup on your screen, so that the PXE boot process of the compute node can be
-visually monitored. After an initial PXE boot of the compute node it will automatically be shutdown.
-
-The compute node Vagrant machine it s bit different that most Vagrant machine because it is not created
-with a user account to which Vagrant can connect, which is the normal behavior of Vagrant. Instead the
-Vagrant files is configured with a _dummy_ `communicator` which will fail causing the following error
-to be displayed, but the compute node Vagrant machine will still have been created correctly.
-```
-The requested communicator 'none' could not be found.
-Please verify the name is correct and try again.
-```
-
-The compute node VM will boot, register with MAAS, and then be shut off. After this is complete an entry
-for the node will be in the MAAS UI at `http://localhost:8888/MAAS/#/nodes`. It will be given a random
-hostname made up, in the Canonical way, of a adjective and an noun, such as `popular-feast.cord.lab`. _The 
-name will be different for everyone._ The new node will be in the `New` state.
-
-If you have properly configured power management for virtualbox (see below) the host will be automatically
-transitioned from `New` through the start of `Comissioning` and `Acquired` to `Deployed`.
-
-#### Virtual Box Power Management
-Virtual box power management is implemented via helper scripts that SSH to the virtual box host and 
-execute `vboxmanage` commands. For this to work The scripts must be configured with a username and host
-to utilize when SSHing and that account must allow SSH from the head node guest to the host using
-SSH keys such that no password entry is required.
-
-To enable SSH key based login, assuming that VirtualBox is running on a Linux based system, you can copy
-the MAAS ssh public key from `/var/lib/maas/.ssh/id_rsa.pub` on the head known to your accounts `authorized_keys`
-files. You can verify that this is working by issuing the following commands from your host machine:
-```
-vagrant ssh headnode
-sudo su - maas
-ssh yourusername@host_ip_address
-```
-
-If you are able to accomplish these commands the VirtualBox power management should operate correctly.
-
-### Post Deployment Provisioning of the Compute Node
-Once the node is in the `Deployed` state, it will be provisioned for use in a CORD POD by the execution of 
-an `Ansible` playbook.
-
-### Complete
-Once the compute node is in the `Deployed` state and post deployment provisioning on the compute node is
-complete, this task is complete.
diff --git a/QUICKSTART_PHYSICAL.md b/QUICKSTART_PHYSICAL.md
deleted file mode 100644
index 7410137..0000000
--- a/QUICKSTART_PHYSICAL.md
+++ /dev/null
@@ -1,247 +0,0 @@
-# Quick Start for Physical CORD POD
-This guide is meant to enable the user to utilize the artifacts of this
-repository to to deploy CORD on to a physical hardware rack. The artifacts in
-this repository will deploy CORD against a standard physical rack wired
-according to the **best practices** as defined in this document.
-
-## Physical configuration
-![Physical Hardware Connectivity](doc/images/physical.png)
-
-As depicted in the diagram above the base model for the CORD POD deployment
-contains:
-- 4 OF switches comprising the leaf - spine fabric utilized for data traffic
-- 4 compute nodes with with 2 40G ports and 2 1G ports
-- 1 top of rack (TOR) switch utilized for management communications
-
-The best practices in terms of connecting the components of the CORD POD
-include:
-- Leaf nodes are connected to the spines nodes starting at the highest port
-number on the leaf.
-- For a given leaf node, its connection to the spine nodes terminate on the
-same port number on each spine.
-- Leaf *n* connections to spine nodes terminate at port *n* on each spine
-node.
-- Leaf spine switches are connected into the management TOR starting from the
-highest port number.
-- Compute nodes 40G interfaces are named *eth0* and *eth1*.
-- Compute nodes 10G interfaces are named *eth2* and *eth3*.
-- Compute node *n* is connected to the management TOR switch on port *n*,
-egressing from the compute node at *eth2*.
-- Compute node *n* is connected to its primary leaf, egressing at *eth0* and terminating on the leaf at port *n*.
-- Compute node *n* is connected to its secondary leaf, egressing at *eth1* and
-terminating on the leaf at port *n*.
-- *eth3* on the head node is the uplink from the POD to the Internet.
-
-The following assumptions are made about the phyical CORD POD being deployed:
-- The leaf - spine switchs are Accton 6712s
-- The compute nodes are using 40G Intel NIC cards
-- The compute node that is to be designated the *head node* has
-Ubuntu 14.04 LTS installed.
-
-## Bootstrapping the Head Node
-The head node is the key to the physical deployment of a CORD POD. The
-automated deployment of the physical POD is designed such that the head node is
-manually deployed, with the aid of automation tools, such as Ansible and from
-this head node the rest of the POD deployment is automated.
-
-The head node can be deployed either from a node outside the CORD POD or by
-deploying from the head the head node. The procedure in each scenario is
-slightly different because during the bootstrapping of the head node it is
-possible that the interfaces needed to be renamed and the system to be
-rebooted.
-
-### Bootstrapping the Head Node from Outside the POD (OtP)
-To deploy the head node it is assumed that the node is reachable from outside the POD over its *eth3* interface and that the machine from which you are
-bootstrapping the head node has [`Vagrant`](https://www.vagrantup.com/) and [`git`](https://git-scm.com/) installed.
-
-**NOTE:** *This quick start walk through assumes that the head node is being
-deployed from the Vagrant machine that is defined within the repository. It is
-possible to deployment the head node from the cloned repository without
-using a Vagrant VM along as [`Ansible`](https://www.ansible.com/) version > 2.0 is installed on the OtP
-host. When doing a deployment without the Vagrant VM, just invoke the given
-Ansible commands directly from on the OtP host.*
-
-#### Cloning the Repository
-To clone the repository select a location on the outside the POD (OtP) host and
-issue the `git` command to download (clone) the repository.
-```
-$ git clone http://gerrit.opencord.org/maas
-```
-When this is complete, a listing (`ls`) of this directory should yield output
-similar to:
-```
-$ ls
-QUICKSTART.md           automation/             doc/                    harvester/              roles/
-QUICKSTART_PHYSICAL.md  bootstrap/              fabric.yml              head-node.yml           scripts/
-README.md               build.gradle            gradle/                 head.yml
-Vagrantfile             compute.yml             gradlew*                host_vars/
-ansible/                dev-head-node.yml       gradlew.bat             hosts
-```
-
-#### Starting the Vagrant Machine
-To start and connect to the the Vagrant machine, issue the following commands:
-```
-$ vagrant up maasdev
-$ vagrant ssh maasdev
-```
-**NOTE:** *It may have several minutes for the first command `vagrant up maasdev` to complete as it will include creating the VM as well as downloading
-and installing various software packages.*
-
-Once connected to the Vagrant machine, you can find the deployment artifacts
-in the `/maasdev` directory on the VM.
-```
-cd /maasdev
-```
-
-#### Invoke Bootstrapping
-The head node will be bootstrapped using Ansible and the playbook
-`head-node.yml`. This playbook is small and defines the Ansible role for the
-head node:
-```
-- hosts: all
-  serial: 1
-  roles:
-    - head-node
-```
-The `head-node` role depend on the Ansible `compute-node` role as well as
-others, the important point being that a head node is simply a compute node
-with some extra stuff installed.
-
-To bootstrap the head node the following command can be issues:
-```
-$ ansible-playbook -i <ip-of-head-node>, --ask-pass --ask-sudo-pass \
-  --user=<deployment-user-id> --extra-vars='fabric_ip=<fabric-ip> \
-  management_ip=<management-ip> --exeternal_ip=<external-ip>' head-node.yml
-```
-
-##### Playbook Options
-**NOTE** *The comma (,) after the <ip-of-head-node> is important as it
-informs Ansible that the option is the IP of the head node and not an
-inventory file that contains the list of IPs for the Ansible managed nodes.*
-
-replace `<ip-of-head-node>` with the actually IP address to the host
-on which the head node is being deployed and `<deployment-user-id>` with a
-user ID on the host which can be used to `ssh` from the OtP host to the
-head node and has `sudo` rights on the head node.
-
-During the bootstrapping of the nodes various network settings are modified.
-The `extra-vars` settings to the `ansible-playbook` command allow these
-settings to be specified. The values for the `extra-vars` can be one of the
-following:
-- dhcp - assumes that the address will be assigned via DHCP
-- manual - assumes that the address will be assigned manually
-- a.b.c.d/# - specifies the IP address of the interface as well as the number
-of bits in the netmask.
-
-These values are used to configure the interfaces and will result in changes
-to the `/etc/network/interface` file on the head node.
-
-If you do not wish for the deployment scripts to modify the network
-configuration of the head node you can substitute the `ansible-playbook` option `--skip-tags=interface_config` for the `extra-vars` options.
-
-After you invoke the `ansible-playbook` you will be prompted for the `ssh` and
-`sudo` passwords for the the remote user. In most cases these are the same.
-
-The `ansible-playbook` will take several minutes to complete as it does
-roughly the following:
-1. Download and install Docker and Docker Compose
-1. Rename and configured the network interfaces
-1. Reboot the system to apply the network changes
-1. Download boot images for the Accton switches
-1. Download and install Canonical's Metal as a Service (MAAS) software
-1. Configure MAAS
-1. Download and invoke Docker images to support automation of MAAS
-   capabilities
-
-#### Wait for Image Download
-As part of the bootstrapping and configuration of MAAS, downloading of boot
-images for the other compute nodes is initiated. Before the other compute
-nodes can be booted the download of this image must be completed.
-
-To verify the status of the image download you can visit the MAAS UI at `http://<ip-of-head-node>/MAAS` and select the `Images` tab. On this page
-the status of the download will be visible.
-
-#### Complete
-Once the download of boot image for the compute nodes is complete, the head
-node is boot strapped and you can proceed to the section [Booting the Rest
-of the POD]().
-
-### Bootstrapping the Head Node from the Head Node
-In order to bootstrap the head node from the head node Ansible, version >= 2,
-must be installed on the head node. Additionally the following files /
-directories from the repository must be on head node:
-- `roles` (and all its sub-directories and files)
-- `head-node.yml`
-
-#### Invoke on the Head Node (OtH) Bootstrapping
-Once Ansible is installed on the head node and the proper files are availble,
-the head node can be bootstrapped. Because the bootstrapping capabilities
-modify the network interface configuration of the head node, when
-bootstrapping OtH must be done in two steps as there is a system reboot in
-the middle. The first step is provisioning the head node as as a compute
-node and the second is provisioning it with the head node capabilities.
-
-### Bootstrap Head Node as Compute Node
-To complete the first phase of the bootstrapping the head node, provisioning
-as a compute node, the following command can be used:
-```
-$ ansible-playbook -i <ip-of-head-node>, --ask-pass --ask-sudo-pass \
-  --user=<deployment-user-id> --extra-vars='fabric_ip=<fabric-ip> \
-  management_ip=<management-ip> --exeternal_ip=<external-ip>' compute-node.yml
-```
-
-(see [Playbook Options](#playbook-options) for a description of the parameters)
-
-If you do not wish to have the Ansible playbook to modify the network
-configuration of the host you can add the `--skip-tags=interface_config`
-option to the `ansible-playbook` command line.
-
-If you do not wish the system to auto reboot if the network configuration is
-modified you can add the `--skip-tags=reboot` option to the
-`ansible-playbook` command line.
-
-**NOTE:** *If the network parameters have changed the head node will likely
-need to be rebooted for those changes to take effect. If you would like to
-understand the changes before the reboot, you can specify the
-`--skip-tags=reboot` option and then run the follow diff command:*
-```
-diff /etc/network/interfaces.1 /etc/network/interfaces
-```
-
-**NOTE:** *Be sure to reboot the head node after network changes so that they
-will be applied.*
-
-### Bootstrap Head Node as Head Node
-Once the head node has been provisioned as a compute node the head node
-capabilities can be overlaid. This can be done using the following command
-line:
-```
-$ ansible-playbook -i <ip-of-head-node>, --ask-pass --ask-sudo-pass \
-  --user=<deployment-user-id> --extra-vars='fabric_ip=<fabric-ip> \
-  management_ip=<management-ip> --exeternal_ip=<external-ip>' --skip-tags=interface_config head-node.yml
-```
-(see [Playbook Options](#playbook-options) for a description of the parameters)
-
-The `ansible-playbook` will take several minutes to complete as it does
-roughly the following:
-1. Download and install Docker and Docker Compose
-1. Rename and configured the network interfaces
-1. Reboot the system to apply the network changes
-1. Download boot images for the Accton switches
-1. Download and install Canonical's Metal as a Service (MAAS) software
-1. Configure MAAS
-1. Download and invoke Docker images to support automation of MAAS
-   capabilities
-
-#### Wait for Image Download
-As part of the bootstrapping and configuration of MAAS, downloading of boot
-images for the other compute nodes is initiated. Before the other compute
-nodes can be booted the download of this image must be completed.
-
-To verify the status of the image download you can visit the MAAS UI at `http://<ip-of-head-node>/MAAS` and select the `Images` tab. On this page
-the status of the download will be visible.
-
-#### Complete
-Once the download of boot image for the compute nodes is complete, the head
-node is boot strapped and you can proceed to the section [Booting the Rest
-of the POD]().
diff --git a/doc/images/uservices.graffle b/doc/images/uservices.graffle
new file mode 100644
index 0000000..a40aa1d
--- /dev/null
+++ b/doc/images/uservices.graffle
Binary files differ
diff --git a/doc/images/uservices.png b/doc/images/uservices.png
new file mode 100644
index 0000000..e2e3c32
--- /dev/null
+++ b/doc/images/uservices.png
Binary files differ