Refactoring documentation
Change-Id: Ia023db7642928b0a04e9bfff3859a8e564e771b7
diff --git a/docs/SUMMARY.md b/docs/SUMMARY.md
index 944eb79..c9df43e 100644
--- a/docs/SUMMARY.md
+++ b/docs/SUMMARY.md
@@ -1,9 +1,10 @@
# Summary
* [Guide Overview](overview.md)
+* [Terminology](terminology.md)
* [Building and Installing CORD](README.md)
- * [CORD-in-a-Box (Quick Start)](quickstart.md)
- * [Physical POD (Quick Start)](quickstart_physical.md)
+ * [Quickstarts](quickstarts.md)
+ * [Installing CORD-in-a-Box](install_ciab.md)
* [Installing a Physical POD](install_pod.md)
* [Appendix: Network Settings](appendix_network_settings.md)
* [Appendix: Basic Configuration](appendix_basic_config.md)
@@ -23,6 +24,7 @@
* [Implementation Details](xos/dev/sync_impl.md)
* [Migrating Models to 4.0](xos/migrate-4.0.md)
* [Developing for CORD](develop.md)
+ * [Getting the CORD source code](cord_repo.md)
* [Workflow: platform-install](platform-install/README.md)
* [Workflow: local dev](xos/dev/local_env.md)
* [Example Service](xos/example_service.md)
diff --git a/docs/appendix_network_settings.md b/docs/appendix_network_settings.md
index 29351d4..89d68fe 100644
--- a/docs/appendix_network_settings.md
+++ b/docs/appendix_network_settings.md
@@ -15,29 +15,28 @@
When deciding which interfaces are in this bond, the deployment script selects the list of available interfaces and filters them on the criteria below. The output is the list of interfaces that should be associated with the bond interface. The resultant list is sorted alphabetically. Finally, the interfaces are configured to be in the bond interface with the first interface in the list being the primary.
-The network configuration can be customized before deploying, using a set of variables that can be set in your deployment configuration file, for example `podX.yml`, in the dev VM, under `/cord/build/config`.
-Below an example of the so called “extraVars” section is reported:
+The network configuration can be customized before deploying, using a set of variables that can be set in your deployment configuration file, for example `podX.yml`, in the dev VM, under `/cord/build/podconfig`.
+Below is an example of the most commonly used network variables:
```
-extraVars:
- - 'fabric_include_names=<name1>,<name2>'
- - 'fabric_include_module_types=<mod1>,<mod2>'
- - 'fabric_include_bus_types=<bus1>,<bus2>'
- - 'fabric_exclude_names=<name1>,<name2>'
- - 'fabric_exclude_module_types=<mod1>,<mod2>'
- - 'fabric_exclude_bus_types=<bus1>,<bus2>'
- - 'fabric_ignore_names=<name1>,<name2>'
- - 'fabric_ignore_module_types=<mod1>,<mod2>'
- - 'fabric_ignore_bus_types=<bus1>,<bus2>'
- - 'management_include_names=<name1>,<name2>'
- - 'management_include_module_types=<mod1>,<mod2>'
- - 'management_include_bus_types=<bus1>,<bus2>'
- - 'management_exclude_names=<name1>,<name2>'
- - 'management_exclude_module_types=<mod1>,<mod2>'
- - 'management_exclude_bus_types=<bus1>,<bus2>'
- - 'management_ignore_names=<name1>,<name2>'
- - 'management_ignore_module_types=<mod1>,<mod2>'
- - 'management_ignore_bus_types=<bus1>,<bus2>'
+'fabric_include_names'='<name1>,<name2>'
+'fabric_include_module_types'='<mod1>,<mod2>'
+'fabric_include_bus_types'='<bus1>,<bus2>'
+'fabric_exclude_names'='<name1>,<name2>'
+'fabric_exclude_module_types'='<mod1>,<mod2>'
+'fabric_exclude_bus_types'='<bus1>,<bus2>'
+'fabric_ignore_names'='<name1>,<name2>'
+'fabric_ignore_module_types'='<mod1>,<mod2>'
+'fabric_ignore_bus_types'='<bus1>,<bus2>'
+'management_include_names'='<name1>,<name2>'
+'management_include_module_types'='<mod1>,<mod2>'
+'management_include_bus_types'='<bus1>,<bus2>'
+'management_exclude_names'='<name1>,<name2>'
+'management_exclude_module_types'='<mod1>,<mod2>'
+'management_exclude_bus_types'='<bus1>,<bus2>'
+'management_ignore_names'='<name1>,<name2>'
+'management_ignore_module_types'='<mod1>,<mod2>'
+'management_ignore_bus_types'='<bus1>,<bus2>'
```
Each of the criteria is specified as a comma separated list of regular expressions.
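+
+For illustration, here is a minimal sketch of how a few of these variables might look in a hypothetical podX.yml (the variable names come from the list above; the interface name and module type values are invented examples, each being a comma separated list of regular expressions):
+
+```
+'fabric_include_names'='eth2,eth3'
+'management_include_names'='eth0'
+'management_exclude_module_types'='i40e'
+```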
diff --git a/docs/appendix_vsg.md b/docs/appendix_vsg.md
index d53a9f8..df974bb 100644
--- a/docs/appendix_vsg.md
+++ b/docs/appendix_vsg.md
@@ -56,12 +56,13 @@
* Run the netcfg command. Verify that the updated gateway information is present under publicGateways:
```
-"publicGateways" : [ {
- "gatewayIp" : "10.6.1.193",
- "gatewayMac" : "02:42:0a:06:01:01"
- }, {
- "gatewayIp" : "10.6.1.129",
- "gatewayMac" : "02:42:0a:06:01:01"
- } ],
- ```
-
+"publicGateways" : [
+ {
+ "gatewayIp" : "10.6.1.193",
+ "gatewayMac" : "02:42:0a:06:01:01"
+ }, {
+ "gatewayIp" : "10.6.1.129",
+ "gatewayMac" : "02:42:0a:06:01:01"
+ }
+],
+```
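+
+A minimal way to run that check, assuming you are already at the ONOS CLI prompt of the controller holding this configuration (`netcfg` is the standard ONOS CLI command for dumping the network configuration):
+
+```
+onos> netcfg
+```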
diff --git a/docs/book.json b/docs/book.json
index 388b9e5..f141a04 100644
--- a/docs/book.json
+++ b/docs/book.json
@@ -4,5 +4,30 @@
"structure": {
"summary": "SUMMARY.md"
},
- "plugins": ["toggle-chapters"]
+ "variables": {
+ "branch": "master"
+ },
+ "plugins": [
+ "toggle-chapters",
+ "versions-select"
+ ],
+ "pluginsConfig": {
+ "versions": {
+ "gitbookConfigURL": "https://raw.githubusercontent.com/opencord/cord/master/docs/book.json",
+ "options": [
+ {
+ "value": "http://guide.opencord.org",
+ "text": "Master"
+ },
+ {
+ "value": "http://guide.opencord.org/400",
+ "text": "4.0"
+ },
+ {
+ "value": "http://wiki.opencord.org",
+ "text": "3.0 and older"
+ }
+ ]
+ }
+ }
}
diff --git a/docs/build_internals.md b/docs/build_internals.md
index abc6d69..7e02727 100644
--- a/docs/build_internals.md
+++ b/docs/build_internals.md
@@ -23,10 +23,8 @@
It can be downloaded via:
-```
-curl -o ~/cord-bootstrap.sh https://raw.githubusercontent.com/opencord/cord/master/scripts/cord-bootstrap.sh
-chmod +x cord-bootstrap.sh
-```
+<pre><code>curl -o ~/cord-bootstrap.sh https://raw.githubusercontent.com/opencord/cord/{{ book.branch }}/scripts/cord-bootstrap.sh
+chmod +x cord-bootstrap.sh</code></pre>
The bootstrap script has the following useful options:
@@ -66,14 +64,13 @@
#### Examples: cord-boostrap.sh
-Download source code and prep for a local build by installing docker
+Download source code and prep for a local build by installing Docker
```
./cord-bootstrap.sh -d
```
-A `rcord-local` build from master. Note that the make targets may not run if
-you aren't already in the `docker` group, so you'd need to logout/login and
+Build an `rcord-local` config from the {{ book.branch }} branch. Note that the make targets may not run if you aren't already in the `docker` group, so you'd need to log out/log in and
rerun them.
```
@@ -86,7 +83,7 @@
./cord-bootstrap.sh -v -p orchestration/xos:1000/1
```
-A virtual rcord pod, with tests run afterward. Assumes that you're already in
+A virtual rcord pod, with tests run afterward. Assumes that you're already in
the `libvirtd` group:
```
@@ -104,11 +101,10 @@
Downloading the source tree can be done by running:
-```
-mkdir cord && cd cord
-repo init -u https://gerrit.opencord.org/manifest -b master
-repo sync
-```
+<pre><code>mkdir cord && \
+cd cord && \
+repo init -u https://gerrit.opencord.org/manifest -b {{ book.branch }} && \
+repo sync</code></pre>
The build system can be found in the `cord/build/` directory.
diff --git a/docs/cord_repo.md b/docs/cord_repo.md
new file mode 100644
index 0000000..4aedda4
--- /dev/null
+++ b/docs/cord_repo.md
@@ -0,0 +1,26 @@
+# Getting the CORD source code
+
+## Install repo
+Repo is a tool from Google that helps us manage the code base.
+
+```
+curl https://storage.googleapis.com/git-repo-downloads/repo > ~/repo && \
+sudo chmod a+x repo && \
+sudo cp repo /usr/bin
+```
+
+## Download the CORD repositories
+
+<pre><code>mkdir ~/cord && \
+cd ~/cord && \
+repo init -u https://gerrit.opencord.org/manifest -b {{ book.branch }} && \
+repo sync</code></pre>
+
+>NOTE: master is used as an example. You can substitute it with your favorite branch, for example cord-4.0 or cord-3.0. You can also use a flavor-specific manifest, such as "mcord" or "ecord". The flavor you use here is not correlated to the profile you will choose to run later, but it is suggested that you use the manifest corresponding to the deployment you want. An example is to use the "ecord" manifest and then deploy the ecord.yml service\_profile.
+
+When this is complete, a listing (`ls`) inside this directory should yield output similar to:
+
+```
+ls -F
+build/ incubator/ onos-apps/ orchestration/ test/
+```
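+
+As an example, to track the 4.0 release instead of master (the branch name is taken from the note above; any other branch can be substituted the same way):
+
+```
+repo init -u https://gerrit.opencord.org/manifest -b cord-4.0 && \
+repo sync
+```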
diff --git a/docs/quickstart.md b/docs/install_ciab.md
similarity index 92%
rename from docs/quickstart.md
rename to docs/install_ciab.md
index 451afa7..63952b4 100644
--- a/docs/quickstart.md
+++ b/docs/install_ciab.md
@@ -1,32 +1,27 @@
-# CORD-in-a-Box: Quick Start Guide
+# Installing CORD-in-a-Box (CiaB)
This guide walks through the steps to bring up a demonstration CORD
"POD", running in virtual machines on a single physical server (a.k.a.
-"CORD-in-a-Box" or just "CiaB"). The purpose of this demonstration POD is to enable those
-interested in understanding how CORD works to examine and interact with a
-running CORD environment. It is a good place for novice CORD users to start.
+"CORD-in-a-Box" or just "CiaB"). The purpose of this demonstration POD is to enable those interested in understanding how CORD works to examine and interact with a running CORD environment. It is a good place for novice CORD users to start.
-**NOTE:** *This guide describes how to install
-a simplified version of a CORD POD on a
-single server using virtual machines. If you are looking for instructions on
-how to install a multi-node POD, you will find them in the
-[Physical POD Guide](./quickstart_physical.md). For more details about the
-actual build process, look there.*
+>NOTE: Looking for a quick list of essential build commands? You can find it [here](quickstarts.md).
+
+>NOTE: This guide describes how to install a simplified version of a CORD POD on a single server using virtual machines. If you are looking for instructions on how to install a multi-node POD, you will find them in the
[Physical POD installation guide](install_pod.md).
## What you need (prerequisites)
You will need a *target server*, which will run both a build environment
in a Vagrant VM (used to deploy CORD) as well as CiaB itself.
-Target server requirements:
+### Target server requirements
* 64-bit server, with
- * 32GB+ RAM
- * 8+ CPU cores
+ * 48GB+ RAM
+ * 12+ CPU cores
* 200GB+ disk
-* Access to the Internet
-* Ubuntu 14.04 LTS freshly installed (see [TBF]() for instruction on how to
- install Ubuntu 14.04).
+* Access to the Internet (no enterprise proxies)
+* Ubuntu 14.04 LTS freshly installed
* User account used to install CORD-in-a-Box has password-less *sudo*
capability (e.g., like the `ubuntu` user)
@@ -37,10 +32,8 @@
account using your organization's email address and choose "Join Existing
Project"; for "Project Name" enter `cord-testdrive`.
-**NOTE:** *CloudLab is supporting CORD as a courtesy. It is expected that you
-will not use CloudLab resources for purposes other than evaluating CORD. If,
-after a week or two, you wish to continue using CloudLab to experiment with or
-develop CORD, then you must apply for your own separate CloudLab project.*
+>NOTE: CloudLab is supporting CORD as a courtesy. It is expected that you will not use CloudLab resources for purposes other than evaluating CORD. If, after a week or two, you wish to continue using CloudLab to experiment with or
+develop CORD, then you must apply for your own separate CloudLab project.
Once your account is approved, start an experiment using the
`OnePC-Ubuntu14.04.5` profile on the Wisconsin, Clemson, or Utah clusters.
@@ -62,11 +55,10 @@
On the target server, download the script that bootstraps the build process and run it:
-```
-wget https://raw.githubusercontent.com/opencord/cord/master/scripts/cord-bootstrap.sh
-chmod +x cord-bootstrap.sh
-~/cord-bootstrap.sh -v
-```
+<pre><code>cd ~ && \
+wget https://raw.githubusercontent.com/opencord/cord/{{ book.branch }}/scripts/cord-bootstrap.sh && \
+chmod +x cord-bootstrap.sh && \
+~/cord-bootstrap.sh -v</code></pre>
This script installs software dependencies (e.g., Ansible, Vagrant) as well as the CORD source code (in `~/cord`).
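+
+A quick, purely illustrative sanity check that the dependencies were installed is to ask Vagrant for its version:
+
+```
+vagrant --version
+```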
@@ -81,7 +73,7 @@
```
will check out the `platform-install` repo with changeset 1233, revision 4, and
-`xos` repo changeset 1234, revision 2. Note that the `-p` option
+`xos` repo changeset 1234, revision 2. Note that the `-p` option
will only have an effect the first time the `cord-bootstrap.sh` script is run.
You can also just run the `repo` command directly to download patch sets.
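+
+For example, the same patch sets can be fetched by hand with repo's standard `download` subcommand (the project path and change/revision numbers are the ones used in the example above):
+
+```
+cd ~/cord && \
+repo download orchestration/xos 1234/2
+```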
@@ -100,7 +92,7 @@
The output of the build will be displayed, as well as saved in `~/build.out`.
Also logs for individual steps of the build are stored in `~/cord/build/logs`.
-**NOTE:** *If you are connecting to a remote target server, it is highly
+>NOTE: If you are connecting to a remote target server, it is highly
recommended that you run the above commands in a `tmux` session, or
use `mosh` to connect to the target rather than `ssh`. Without one of these,
interrupted connectivity between your local machine and the remote server
@@ -120,7 +112,6 @@
The output of the tests will be displayed, as well as stored in `~/cord/build/logs`.
-
## Inspecting CiaB
CiaB creates a virtual CORD POD running inside Vagrant VMs, using
@@ -165,7 +156,7 @@
The `corddev` VM is a build machine used
to drive the installation. It downloads and builds Docker containers and
-publishes them to the virtual head node (see below). It then installs MaaS on
+publishes them to the virtual head node (see below). It then installs MAAS on
the virtual head node (for bare-metal provisioning) and the ONOS, XOS, and
OpenStack services in containers. This VM can be entered as follows:
@@ -279,7 +270,7 @@
The `compute1` VM is the virtual compute node controlled by OpenStack.
This VM can be entered from the `head1` VM. Run `cord prov list` to get the
-node name (assigned by MaaS). The node name will be something like
+node name (assigned by MAAS). The node name will be something like
`bony-alley.cord.lab`; in this case, to login you'd run:
```
@@ -322,13 +313,13 @@
```
-### MaaS GUI
+### MAAS GUI
-You can access the MaaS (Metal-as-a-Service) GUI by pointing your browser to
+You can access the MAAS (Metal-as-a-Service) GUI by pointing your browser to
the URL `http://<target-server>:8080/MAAS/`. E.g., if you are running on CloudLab,
your `<target-server>` is the hostname of your CloudLab node.
The username is `cord` and the auto-generated password is found in `~/cord/build/maas/passwords/maas_user.txt` on the CiaB server.
-For more information on MaaS, see [the MaaS documentation](http://maas.io/docs).
+For more information on MAAS, see [the MAAS documentation](http://maas.io/docs).
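+
+For example, to print the password on the CiaB server (the path is the one given above):
+
+```
+cat ~/cord/build/maas/passwords/maas_user.txt
+```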
### XOS GUI
@@ -347,7 +338,8 @@
Here is a sample output:
![subscriber-service-graph.png](subscriber-service-graph.png)
-_NOTE that the `Service Graph` will need to be detangled. You can organize the nodes by dragging them around._
+
+>NOTE: The `Service Graph` will need to be detangled. You can organize the nodes by dragging them around.
### Kibana log viewing GUI
diff --git a/docs/OFFLINE_INSTALL.md b/docs/install_offline.md
similarity index 100%
rename from docs/OFFLINE_INSTALL.md
rename to docs/install_offline.md
diff --git a/docs/install_pod.md b/docs/install_pod.md
index 924eceb..5c76efc 100644
--- a/docs/install_pod.md
+++ b/docs/install_pod.md
@@ -1,60 +1,16 @@
-#Installing a Physical POD
+# Installing a Physical POD
The following is a detailed, step-by-step recipe for installing a physical POD.
->NOTE: If you are new to CORD and would like to get familiar with it, you should
->start by bringing up a development POD on a single physical server, called
->[CORD-in-a-Box](quickstart.md).
+>NOTE: Looking for a quick list of essential build commands? You can find it [here](quickstarts.md).
->NOTE: Also see the [Quick Start: Physical POD](quickstart_physical.md) Guide
->for a streamlined overview of the physical POD install process.
-
-##Terminology
-
-This guide uses the following terminology.
-
-* **POD**: A single physical deployment of CORD.
-
-* **Full POD**: A typical configuration, and is used as example in this Guide.
-A full CORD POD is composed by three servers, and four fabric switches.
-It makes it possibile to experiment with all the core features of CORD and it
-is what the community uses for tests.
-
-* **Half POD**: A minimum-sized configuration. It is similar to a full POD, but with less hardware. It consists of two servers (one head node and one compute node), and one fabric switch. It does not allow experimentation with all of the core features that
-CORD offers (e.g., a switching fabric), but it is still good for basic experimentation and testing.
-
-* **Development (Dev) / Management Node**: This is the machine used
-to download, build and deploy CORD onto a POD.
-Sometimes it is a dedicated server, and sometime the developer's laptop.
-In principle, it can be any machine that satisfies the hardware and software
-requirements reported below.
-
-* **Development (Dev) VM**: Bootstrapping the CORD installation requires a lot of
-software to be installed and some non-trivial configurations to be applied.
-All this should happens on the dev node.
-To help users with the process, CORD provides an easy way to create a
-VM on the dev node with all the required software and configurations in place.
-
-* **Head Node**: One of the servers in a POD that runs management services
-for the POD. This includes XOS (the orchestrator), two instances of ONOS
-(the SDN controller, one to control the underlay fabric, one to control the overlay),
-MaaS and all the services needed to automatically install and configure the rest of
-the POD devices.
-
-* **Compute Node(s)**: A server in a POD that run VMs or containers associated with
-one or more tenant services. This terminology is borrowed from OpenStack.
-
-* **Fabric Switch**: A switch in a POD that interconnects other switch and server
-elements inside the POD.
-
-* **vSG**: The virtual Subscriber Gateway (vSG) is the CORD counterpart for existing
-CPEs. It implements a bundle of subscriber-selected functions, such as Restricted Access, Parental Control, Bandwidth Metering, Access Diagnostics and Firewall. These functionalities run on commodity hardware located in the Central Office rather than on the customer’s premises. There is still a device in the home (which we still refer to as the CPE), but it has been reduced to a bare-metal switch.
+>NOTE: If you are new to CORD and would like to get familiar with it, you should start by bringing up a development POD on a single physical server, called [CORD-in-a-Box](install_ciab.md).
## Overview of a CORD POD
The following is a brief description of a generic full POD.
-###Physical Configuration
+### Physical Configuration
A full POD includes a ToR management switch, four fabric switches, and three
standard x86 servers. The following figure does not show access devices
@@ -63,7 +19,7 @@
<img src="images/physical-overview.png" alt="Drawing" style="width: 400px;"/>
-###Logical Configuration: Data Plane Network
+### Logical Configuration: Data Plane Network
The following diagram is a high level logical representation of a typical CORD POD.
@@ -76,7 +32,7 @@
switches form a leaf and spine fabric. The compute nodes and the head
node are connected to a port of one of the leaf switches.
-###Logical Configuration: Control Plane / Management Network
+### Logical Configuration: Control Plane / Management Network
The following diagram shows in blue how the components of the system are
connected through the management network.
@@ -86,7 +42,7 @@
As shown in this figure, the head node is the only server in the POD connected both
to Internet and to the other components of the system. The compute nodes and the switches are only connected to the head node, which provides them with all the software needed.
-##Sample Workflow
+## Sample Workflow
It is important to have a general picture of installation workflow before
getting into the details. The following is a list of high-level tasks involved
@@ -100,17 +56,17 @@
the related configurations.
* The software gets automatically deployed from the head node to the compute nodes.
-##Requirements
+## Requirements
While the CORD project is committed to openness and does not have any interest in sponsoring specific vendors, it provides a reference implementation for both hardware and software to help users build their PODs. What is reported below is a list of hardware that, in the community's experience, has worked well.
Also note that the CORD community will be better able to help you debug issues if your hardware and software configuration looks as similar as possible to the reference implementation reported below.
-##Bill Of Materials (BOM) / Hardware Requirements
+## Bill Of Materials (BOM) / Hardware Requirements
The section provides a list of hardware required to build a full CORD POD.
-###BOM Summary
+### BOM Summary
| Quantity | Category | Brand | Model | Part Num |
|--------|--------|------------|-------------------|-------------|
@@ -120,7 +76,7 @@
| 7 | Cabling (data plane) | Robofiber | QSFP-40G-03C | QSFP-40G-03C |
| 12 | Cabling (Mgmt) | CAT6 copper cables (3M) | * | * |
-###Detailed Requirements
+### Detailed Requirements
* 1x Development Machine. It can be either a physical machine or a virtual machine, as long as the VM supports nested virtualization. It doesn't necessarily have to be Linux (which is used in the rest of this guide); in principle, anything able to satisfy the hardware and software requirements will do. Generic hardware requirements are 2 cores, 4GB of memory, and 60GB of hard disk.
@@ -143,7 +99,7 @@
* 1x 1G L2 copper management switch supporting VLANs or 2x 1G L2 copper management switches
-##Connectivity Requirements
+## Connectivity Requirements
The dev machine and the head node have to download software from
different Internet sources, so they currently need unfettered Internet access.
@@ -152,7 +108,7 @@
Sometimes firewalls, proxies, and software that prevents access to local DNS servers generate issues and should be avoided.
-##Cabling a POD
+## Cabling a POD
This section describes how the hardware components should be
interconnected to form a fully functional CORD POD.
@@ -177,7 +133,7 @@
>NOTE: Vendors often allow a shared management port to provide IPMI functionalities. One of the NICs used for system management (e.g., eth0) can be shared, to be used at the same time also as IPMI port.
-####External Network
+#### External Network
The external network allows POD servers to be reached from the
Internet. This would likely not be supported in a production system,
@@ -196,7 +152,7 @@
* Compute node 1 - 1x IPMI/BMC interface (optional, but recommended)
* Compute node 2 - 1x IPMI/BMC interface (optional, but recommended)
-####Internal Network
+#### Internal Network
The internal/management network is separate from the external one. It has the goal to connect the head node to the rest of the system components (compute nodes and fabric switches). For a typical POD, the internal network includes:
@@ -208,7 +164,7 @@
* Fabric 3 - management interface
* Fabric 4 - management interface
-###User / Data Plane Network
+### User / Data Plane Network
The data plane network (represented in red in the figure) carries user traffic (in green), from the access devices to the point the POD connects to the metro network.
@@ -225,7 +181,7 @@
* Access devices - 1 or more 40G interfaces
* Metro devices - 1 or more 40G interfaces
-###Best Practices
+### Best Practices
The community follows a set of best practices to better be able to remotely debug issues, for example via mailing-lists. The following is not mandatory, but is strongly suggested:
@@ -252,93 +208,39 @@
Only the dev machine and the head node need to be prepped for installation.
The other machines will be fully provisioned by CORD itself.
-###Development Machine
+### Development Machine
-It should run Ubuntu 16.04 LTS (suggested) or Ubuntu 14.04 LTS. Then
-install and configure the following software.
+It should run either Ubuntu 16.04 LTS (recommended) or Ubuntu 14.04 LTS.
-####Install Basic Packages
+A script is provided to help you bootstrap your dev machine and download the CORD repositories.
+
+<pre><code>cd ~ && \
+curl -o ~/cord-bootstrap.sh https://raw.githubusercontent.com/opencord/cord/{{ book.branch }}/scripts/cord-bootstrap.sh && \
+chmod +x cord-bootstrap.sh && \
+./cord-bootstrap.sh -v</code></pre>
+
+After the script successfully runs, log out and log in again so that your user becomes part of the libvirtd group.
+
+At this stage a cord directory should exist in the user's home directory.
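+
+To verify the group change after logging back in, a generic check (not part of the official procedure) is:
+
+```
+groups | grep libvirtd
+```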
+
+### Head Node
+
+It should run Ubuntu 14.04 LTS.
+Then, configure the following.
+
+#### Create a User with "sudoer" permissions (no password)
```
-sudo apt-get -y install git python
+sudo adduser cord && \
+sudo adduser cord sudo && \
+echo 'cord ALL=(ALL) NOPASSWD:ALL' | sudo tee --append /etc/sudoers.d/90-cloud-init-users
```
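+
+To confirm that the password-less sudo rule is in place, a generic check (again, not part of the official steps) is:
+
+```
+sudo -l -U cord
+```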
-####Install repo
+### Compute Nodes
-```
-curl https://storage.googleapis.com/git-repo-downloads/repo > ~/repo &&
-sudo chmod a+x repo &&
-sudo cp repo /usr/bin
-```
-
-####Configure git
-
-Using the email address registered on Gerrit:
-
-```
-git config --global user.email "you@example.com"
-git config --global user.name "Your Name"
-```
-
-####Virtualbox and Vagrant
-
-```
-sudo apt-get install virtualbox vagrant
-```
-
->NOTE: Make sure the version of Vagrant that gets installed is >=1.8 (can be checked using vagrant --version)
-
-###Head Node
-
-It should run Ubuntu 14.04 LTS. Then install and configure the
-following software.
-
-####Install Basic Packages
-
-```
-sudo apt-get -y install curl jq
-```
-
-####Install Oracle Java8
-
-```
-sudo apt-get install software-properties-common -y &&
-sudo add-apt-repository ppa:webupd8team/java -y &&
-sudo apt-get update &&
-echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | sudo debconf-set-selections &&
-sudo apt-get install oracle-java8-installer oracle-java8-set-default -y
-```
-
-####Create a User with "sudoer" Permissions (no password)
-
-```
-sudo adduser cord &&
-sudo adduser cord sudo &&
-sudo echo 'cord ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers.d/90-cloud-init-users
-```
-
-####Copy Your Dev Node ssh Public-Key
-
-On the head node:
-
-```
-ssh-keygen -t rsa &&
-mkdir /home/cord/.ssh/authorized_keys &&
-chmod 700 /home/cord/.ssh &&
-chmod 600 /home/cord/.ssh/authorized_keys
-```
-
-From the dev node:
-
-```
-cat ~/.ssh/id_rsa.pub | ssh cord@{head_node_ip} 'cat >> ~/.ssh/authorized_keys'
-```
-
-###Compute Nodes
-
-The CORD build process installs the compute nodes. You only need to
-configure their BIOS settings so they can PXE boot from the head node
-through the internal ( management) network. In doing this, make sure:
+The CORD build process installs the compute nodes. The only thing to
+configure is the BIOS settings, so that the nodes can PXE boot from the head node
+through the internal (management) network. In doing this, make sure that:
* The network card connected to the internal / management network is configured with DHCP (no static IPs).
@@ -348,138 +250,11 @@
>NOTE: Some users prefer to also connect the IPMI interfaces of the compute nodes to the external network, so they can be controlled from outside the POD as well. Either way, the head node will still be able to control them.
-###Fabric Switches: ONIE
+### Fabric Switches: ONIE
The ONIE installer should already be installed on the switch and set to boot in installation mode. This is usually the default for new switches sold without an Operating System. It might not be the case if the switches already have an Operating System installed. In that case, rebooting the switch in ONIE installation mode depends on different factors, such as the version of the OS installed and the specific model of the switch.
-###Download Software onto the Dev Machine
-
-From the home directory, use `repo` to clone the CORD repository:
-
-```
-mkdir cord && cd cord &&
-repo init -u https://gerrit.opencord.org/manifest -b master &&
-repo sync
-```
-
->NOTE: master is used as example. You can substitute it with your favorite branch, for example cord-2.0 or cord-3.0. You can also use a "flavor" specific manifests such as “mcord” or “ecord”. The flavor you use here is not correlated to the profile you will choose to run later but it is suggested that you use the corresponding manifest for the deployment you want. AN example is to use the “ecord” profile and then deploy the ecord.yml service\_profile.
-
-When this is complete, a listing (`ls`) inside this directory should yield output similar to:
-
-```
-ls -F
-build/ incubator/ onos-apps/ orchestration/ test/
-```
-
-###Build the Dev VM
-
-Instead of installing the prerequisiste software by hand on the dev machine,
-the build environment leverages Vagrant to spawn a VM with the tools required to build and deploy CORD.
-To create the development machine the following Vagrant command can be used:
-
-```
-cd ~/cord/build
-vagrant up corddev
-```
-
-This will create an Ubuntu 14.04 LTS virtual machine and will install some required packages, such as Docker, Docker Compose, and Oracle Java 8.
-
->WARNING: Make sure the VM can obtain sufficient resources. It may takes several minutes for the first command vagrant up corddev to complete, as it will include creating the VM, as well as downloading and installing various software packages. Once the Vagrant VM is created and provisioned, you will see output ending with:
-
-```
-==> corddev: PLAY RECAP *********************************************************************
-==> corddev: localhost : ok=29 changed=25 unreachable=0 failed=0
-```
-
-The important thing is that the unreachable and failed counts are both zero.
-
->NOTE: From the moment the VM gets created, it shares a folder with the OS below (the one of the server or of your personal computer). This means that what was the installation root directory (~/cord), will be also available in the VM under /cord.
-
-###Log into the Dev VM
-
-From the build directory, run the following command to connect to the development VM created
-
-```
-vagrant ssh corddev
-```
-
-Once inside the VM, you can find the deployment artifacts in the `/cord` directory.
-
-In the VM, change to the `/cord/build` directory before continuing.
-
-```
-cd /cord/build
-```
-
-###Fetch Docker Images
-
-The fetching phase of the build process pulls Docker images from the public repository down to the VM, and clones the git submodules that are part of the project. This phase can be initiated with the following command:
-
-```
-./gradlew fetch
-```
-
->NOTE: The first time you run ./gradlew it will download the gradle binary from the Internet and installs it locally. This is a one time operation, but may be time consuming, depending on the speed of your Internet connection.
-
->WARNING: It is unfortunately fairly common to see this command fail due to network timeouts. If theis happens, be patient and run again the command.
-
-Once the fetch command has successfully run, the step is complete. After the command completes you should be able to see the Docker images that were downloaded using the docker images command on the development machine:
-
-```
-docker images
-REPOSITORY TAG IMAGE ID CREATED SIZE
-opencord/onos <none> e1ade494f06e 3 days ago 936.5 MB
-python 2.7-alpine c80455665c57 2 weeks ago 71.46 MB
-xosproject/xos-base <none> 2b791db4def0 4 weeks ago 756.4 MB
-redis <none> 74b99a81add5 11 weeks ago 182.8 MB
-xosproject/xos-postgres <none> 95312a611414 11 weeks ago 393.8 MB
-xosproject/cord-app-build <none> 003a1c20e34a 5 months ago 1.108 GB
-consul <none> 62f109a3299c 6 months ago 41.05 MB
-swarm <none> 47dc182ea74b 8 months ago 19.32 MB
-nginx <none> 3c69047c6034 8 months ago 182.7 MB
-xosproject/vsg <none> dd026689aff3 9 months ago 336 MB
-```
-
-###Build Docker Images
-
-Bare metal provisioning leverages utilities built and packaged as Docker container images. The images can be built by using the following command.
-
-```
-./gradlew buildImages
-```
-
-Once the `buildImages` command successfully runs the task is complete. The CORD artifacts have been built and the Docker images can be viewed by using the docker images command on the dev VM:
-
-```
-docker images --format 'table {{.Repository}}\t{{.Tag}}\t{{.Size}}\t{{.ID}}'
-REPOSITORY TAG SIZE IMAGE ID
-opencord/mavenrepo latest 338.2 MB 2e29009df740
-cord-maas-switchq latest 337.7 MB 73b084b48796
-cord-provisioner latest 822.4 MB bd26a7001dd8
-cord-dhcp-harvester latest 346.8 MB d3cfa30cf38c
-config-generator latest 278.4 MB e58059b1afb2
-cord-maas-bootstrap latest 359.4 MB c70c437c6039
-cord-maas-automation latest 371.8 MB 9757ac34e7f6
-cord-ip-allocator latest 276.5 MB 0f399f8389aa
-opencord/onos <none> 936.5 MB e1ade494f06e
-python 2.7-alpine 71.46 MB c80455665c57
-golang alpine 240.5 MB 00371bbb49d5
-golang 1.6-alpine 283 MB 1ea38172de32
-nginx latest 181.6 MB 01f818af747d
-xosproject/xos-base <none> 756.4 MB 2b791db4def0
-ubuntu 14.04 187.9 MB 3f755ca42730
-redis <none> 182.8 MB 74b99a81add5
-xosproject/xos-postgres <none> 393.8 MB 95312a611414
-xosproject/cord-app-build <none> 1.108 GB 003a1c20e34a
-consul <none> 41.05 MB 62f109a3299c
-swarm <none> 19.32 MB 47dc182ea74b
-nginx <none> 182.7 MB 3c69047c6034
-xosproject/vsg <none> 336 MB dd026689aff3
-```
-
->NOTE: not all the docker machines listed are created by the CORD project but are instead used as a base to create other images.
-
-## Prepare POD Configuration File
+## Prepare POD configuration file and generate the composite configuration
Each CORD POD deployment requires a POD configuration file that
describes how the system should be configured, including what IP
@@ -488,91 +263,31 @@
and much more.
POD configuration files are YAML files with extension .yml, contained
-in the `/cord/build/config` directory in the dev VM. You can either
+in the `/cord/build/podconfig` directory in the dev VM. You can either
create a new file with your favorite editor or copy-and-edit an
existing file. The `sample.yml` configuration file is there for this
-purpose. All parameters have a descriptions. Optional lines have been
-commented out, but can be used in case as needed.
+purpose. All parameters have a description. Optional lines have been
+commented out, but can be used as needed.
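+
+For example, a new configuration can be created by copying the sample file (podX.yml is just the placeholder name used throughout this guide; adjust the path if you are working from a different directory):
+
+```
+cd ~/cord/build/podconfig && \
+cp sample.yml podX.yml
+```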
More information about how the network configuration for the POD can
be customized can be found in an Appendix: POD Network Settings.
-##Publish Docker Images to the Head Node
-
-Publishing consists of pushing the build docker images to the Docker repository on the target head node. This step can take a while as it has to transfer all the image from the development machine to the target head node. This step is started with the following command:
+Once the POD config yaml file has been created, the composite configuration file should be generated with the following command:
```
-./gradlew -PdeployConfig=config/podX.yml publish
+cd ~/cord/build && \
+make PODCONFIG={YOUR_PODCONFIG_FILE.yml} config
```
-Once the publish command successfully runs this task is complete. When this step is complete, a Docker registry has been created on the head node and the images built on the dev node have been published to the head node registry.
+The process generates a set of files in `~/cord/build/genconfig`.
->WARNING: This command sometimes fails for various reasons. Simply
->rerunning the command often solves the problem.
+>NOTE: Before the configuration process runs, the `~/cord/build/genconfig` directory contains only a README.md file.
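+
+A quick way to confirm the step ran is to list that directory (the exact file names will vary by deployment):
+
+```
+ls ~/cord/build/genconfig
+```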
-Verify that the containers are running, using the `docker ps` command on the head node.
-
-```
-docker ps --format 'table {{.ID}}\t{{.Image}}\t{{.Command}}\t{{.CreatedAt}}'
-CONTAINER ID IMAGE COMMAND CREATED AT
-c8dd48fc9d18 registry:2.4.0 "/bin/registry serve " 2016-12-02 11:49:12 -0800 PST
-e983d2e43760 registry:2.4.0 "/bin/registry serve " 2016-12-02 11:49:12 -0800 PST
-```
-
-Alternatively, the docker registry can be queried from any node that has access to the head node. You should be able to observe a list of docker images. Output may vary from deployment to deployment. The following is an example from an R-CORD deployment:
-
-```
-curl -sS http://head-node-ip-address:5000/v2/_catalog | jq .
-{
- "repositories": [
- "config-generator",
- "consul",
- "cord-dhcp-harvester",
- "cord-ip-allocator",
- "cord-maas-automation",
- "cord-maas-switchq",
- "cord-provisioner",
- "gliderlabs/consul-server",
- "gliderlabs/registrator",
- "mavenrepo",
- "nginx",
- "node",
- "onosproject/onos",
- "redis",
- "swarm",
- "xosproject/chameleon",
- "xosproject/exampleservice-synchronizer",
- "xosproject/fabric-synchronizer",
- "xosproject/gui-extension-rcord",
- "xosproject/gui-extension-vtr",
- "xosproject/onos-synchronizer",
- "xosproject/openstack-synchronizer",
- "xosproject/vrouter-synchronizer",
- "xosproject/vsg",
- "xosproject/vsg-synchronizer",
- "xosproject/vtn-synchronizer",
- "xosproject/vtr-synchronizer",
- "xosproject/xos",
- "xosproject/xos-client",
- "xosproject/xos-corebuilder",
- "xosproject/xos-gui",
- "xosproject/xos-postgres",
- "xosproject/xos-synchronizer-base",
- "xosproject/xos-ui",
- "xosproject/xos-ws"
- ]
-}
-```
-
->NOTE: This example uses the `curl` and `jq` to retrieve data
->and pretty print JSON. If your system doesn't have these commands
->installed, they can be installed using `sudo apt-get install -y curl jq`.
-
-##Head Node Deployment
+## Head node deployment
Head node deployment works as follows:
-* Makes the head node a MaaS server from which the other POD elements
+* Makes the head node a MAAS server from which the other POD elements
(fabric switches and compute nodes) can PXE boot (both to load their OS
and to be configured).
* Installs and configures the containers needed to configure other nodes of the network.
@@ -582,49 +297,27 @@
This step is started with the following command:
```
-./gradlew -PdeployConfig=config/podX.yml deploy
+cd ~/cord/build && \
+make build
```
->NOTE: Be patient: this step can take a couple hours to complete.
+>NOTE: Be patient: this step can take an hour to complete.
>WARNING: This command sometimes fails for various reasons.
>Simply re-running the command often solves the problem. If the command
->fails it’s better to start from a clean head node. Most of the time,
->re-starting from the publish step (which creates new containers on
->the head node) helps.
-
-If the process runs smoothly, the output should be similar to:
-
-```
-PLAY RECAP *********************************************************************
-localhost : ok=5 changed=2 unreachable=0 failed=0
-
-Monday 19 June 2017 22:59:22 +0000 (0:00:00.233) 0:00:03.370 ***********
-===============================================================================
-setup -------------------------------------------------------------------
-1.35s
-setup ------------------------------------------------------------------- 1.18s
-automation-integration : Template do-enlist-compute-node script to /etc/maas/ansible/do-enlist-compute-node --- 0.46s
-automation-integration : Have MAAS do-ansible script run do-enlist-compute-node script --- 0.23s
-Include variables ------------------------------------------------------- 0.12s
-:PIdeployPlatform
-:deploy
-
-BUILD SUCCESSFUL
-
-Total time: 57 mins 25.458 secs
-```
+>fails, it's better to start from a clean head node.
This step is complete when the command successfully runs.
-###MaaS
+### MAAS
-As previously mentioned, once the deployment is complete the head node becomes a MaaS region and rack controller, basically acting as a PXE server and serving images through the management network to compute nodes and fabric switches connected to it.
+As previously mentioned, once the deployment is complete the head node becomes a MAAS region and rack controller, basically acting as a PXE server and serving images through the management network to compute nodes and fabric switches connected to it.
The Web UI for MAAS can be viewed by browsing to the head node, using a URL of the form `http://head-node-ip-address/MAAS`.
To log in to the web page, use `cord` as the username. If you have set a password in the deployment configuration, use that; otherwise, the password can be found in your build directory under `<base>/build/maas/passwords/maas_user.txt`.
-After the deploy command installs MAAS, MAAS itself initiates the download of an Ubuntu 14.04 boot image that will be used to boot the other POD devices. This download can take some time and the process cannot continue until the download is complete. The status of the download can be verified through the UI by visiting the URL `http://head-node-ip-address/MAAS/images/`, or via the command line from head node via the following command:
+
+After the deployment process finishes, MAAS initiates the download of an Ubuntu 14.04 boot image that will be used to boot the other POD devices. This download can take some time and the process cannot continue until the download is complete. The status of the download can be verified through the UI by visiting the URL `http://head-node-ip-address/MAAS/images/`, or via the command line from head node via the following command:
```
APIKEY=$(sudo maas-region-admin apikey --user=cord) && \
@@ -636,11 +329,11 @@
When the list is empty you can proceed.
-###Compute Node and Fabric Switch Deployment
+### Compute Node and Fabric Switch Deployment
The section describes how to provision and configure software on POD compute nodes and fabric switches.
-####General Workflow
+#### General Workflow
Once it has been verified that the Ubuntu boot image has been
downloaded, the compute nodes and the fabric switches may be PXE booted.
@@ -653,7 +346,7 @@
>configured as
>prescribed in the _Software Environment Requirements_ section.
-####Important Commands: cord harvest and cord prov
+#### Important Commands: cord harvest and cord prov
Two important commands are available to debug and check the status of
the provisioning. They can be used from the head node CLI.
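+
+For example, to check the provisioning state of the nodes from the head node CLI (`cord prov list` is the same command referenced in the CiaB guide above):
+
+```
+cord prov list
+```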
@@ -689,7 +382,7 @@
Please refer to [Re-provision Compute Nodes and Switches](quickstart_physical.md)
for more details.
-####Static IP Assignment
+#### Static IP Assignment
If you want to assign a specific IP to either a compute node or a
fabric switch, it should be done before booting the device. This
@@ -707,7 +400,7 @@
}
```
-####Compute Nodes
+#### Compute Nodes
The compute node provisioning process installs the servers as
OpenStack compute nodes.
@@ -749,7 +442,7 @@
Once the post deployment provisioning on the compute node is complete, this task is complete.
-####Fabric Switches
+#### Fabric Switches
Similar to the compute nodes, the fabric switches will boot, register with MAAS, and then restart (possibly multiple times).
@@ -778,7 +471,7 @@
Your POD is now installed. You can now try to access the basic
services as described below.
-###ONOS (Underlay)
+### ONOS (Underlay)
A dedicated ONOS instance is installed on the head node to control the underlay infrastructure (the fabric). You can access it with password “rocks”
@@ -786,7 +479,7 @@
* Using the ONOS UI, at: `http://<head-node-ip>/fabric`
-###ONOS (Overlay)
+### ONOS (Overlay)
A dedicated ONOS instance is installed on the head node to control the overlay infrastructure (tenant networks). You can access it with password “rocks”
@@ -794,17 +487,15 @@
* Using the ONOS UI, at: `http://<head-node-ip>/vtn`
-###OpenStack
+### OpenStack
-###XOS UI
+### XOS UI
XOS is the cloud orchestrator that controls the entire POD. It allows
you to define new services and service dependencies. You can access XOS:
* Using the XOS GUI at `http://<head-node-ip>/xos`
-* Using the XOS admin UI at `http://<head-node-ip>/admin/`
-
## Getting Help
If it seems that something has gone wrong with your setup, there are a number of ways that you can get help -- in the documentation on the OpenCORD wiki, on the OpenCORD Slack channel (get an invitation here), or on the CORD-discuss mailing list.
diff --git a/docs/quickstart_old.md b/docs/quickstart_old.md
deleted file mode 100644
index aefd4b5..0000000
--- a/docs/quickstart_old.md
+++ /dev/null
@@ -1,483 +0,0 @@
-# CORD-in-a-Box: Quick Start Guide
-
-This guide walks through the steps to bring up a demonstration CORD
-"POD", running in virtual machines on a single physical server (a.k.a.
-"CORD-in-a-Box"). The purpose of this demonstration POD is to enable those
-interested in understanding how CORD works to examine and interact with a
-running CORD environment. It is a good place for novice CORD users to start.
-
-**NOTE:** *This guide describes how to install
-a simplified version of a CORD POD on a
-single server using virtual machines. If you are looking for instructions on
-how to install a multi-node POD, you will find them in the
-[Physical POD Guide](./quickstart_physical.md). For more details about the
-actual build process, look there.*
-
-## What You Need (Prerequisites)
-
-You will need a *target server*, which will run both a development environment
-in a Vagrant VM (used to deploy CORD) as well as CORD-in-a-Box itself.
-
-Target server requirements:
-
-* 64-bit server, with
- * 32GB+ RAM
- * 8+ CPU cores
- * 200GB+ disk
-* Access to the Internet
-* Ubuntu 14.04 LTS freshly installed (see [TBF]() for instruction on how to
- install Ubuntu 14.04).
-* User account used to install CORD-in-a-Box has password-less *sudo*
- capability (e.g., like the `ubuntu` user)
-
-### Target Server on CloudLab (optional)
-
-If you do not have a target server available that meets the above requirements,
-you can borrow one on [CloudLab](https://www.cloudlab.us). Sign up for an
-account using your organization's email address and choose "Join Existing
-Project"; for "Project Name" enter `cord-testdrive`.
-
-**NOTE:** *CloudLab is supporting CORD as a courtesy. It is expected that you
-will not use CloudLab resources for purposes other than evaluating CORD. If,
-after a week or two, you wish to continue using CloudLab to experiment with or
-develop CORD, then you must apply for your own separate CloudLab project.*
-
-Once your account is approved, start an experiment using the
-`OnePC-Ubuntu14.04.5` profile on the Wisconsin, Clemson, or Utah clusters.
-This will provide you with a temporary target server meeting the above
-requirements.
-
-Refer to the [CloudLab documentation](https://docs.cloudlab.us) for more
-information.
-
-## Download and Run the Script
-
-On the target server, download the script that installs CORD-in-a-Box and run
-it. The script's output is displayed and also saved to `~/cord/install.out`:
-
-```
-curl -o ~/cord-in-a-box.sh https://raw.githubusercontent.com/opencord/cord/master/scripts/cord-in-a-box.sh
-bash ~/cord-in-a-box.sh -t
-```
-
-**NOTE:** *If you are connecting to a remote target server, it is highly
-recommended that you run the `cord-in-a-box.sh` script in a `tmux` session, or
-use `mosh` to connect to the target rather than `ssh`. Without one of these,
-interrupted connectivity between your local machine and the remote server
-may cause the CiaB install to hang.*
-
-The script takes a *long time* (at least two hours) to run. Be patient! If it
-hasn't completely failed yet, then assume all is well!
-
-### Complete
-
-The script builds the CORD-in-a-Box and runs a couple of tests to ensure that
-things are working as expected. Once it has finished running, you'll see a
-**BUILD SUCCESSFUL** message.
-
-The file `~/cord/install.out` contains the output of the build process,
-post-bootstrap phase.
-
-### Using cord-in-a-box.sh to download development code from Gerrit
-
-There is an `-b` option to cord-in-a-box.sh that will checkout a specific
-changeset from a gerrit repo during the run. The syntax for this is `<project
-path>:<changeset>/<revision>`. It can be used multiple times - for example:
-
-```
-bash ~/cord-in-a-box.sh -b build/platform-install:1233/4 -b orchestration/service-profile:1234/2"
-```
-
-will check out the `platform-install` repo with changeset 1233, revision 4, and
-`service-profile` repo changeset 1234, revision 2.
-
-You can find the project path used by the `repo` tool in the [manifest/default.xml](https://gerrit.opencord.org/gitweb?p=manifest.git;a=blob;f=default.xml) file.
-
-### Using cord-in-a-box.sh to run the CORD fabric
-
-The `-f` option to cord-in-a-box.sh can be used to configure an ONOS
-fabric for CORD-in-a-Box. The fabric consists of two leaf and two
-spine switches, each running a [CPqD OpenFlow software
-switch](https://github.com/CPqD/ofsoftswitch13) controlled by ONOS.
-The build process automatically generates a configuration file for the
-fabric and pushes it to ONOS. *THIS FEATURE IS EXPERIMENTAL AND STILL
-UNDER DEVELOPMENT.*
-
-## Inspecting CORD-in-a-Box
-
-CORD-in-a-Box creates a virtual CORD POD running inside Vagrant VMs, using
-libvirt as a backend.
-
-As access to the libvirt socket depends on being in the `libvirtd` group, you
-may need to to logout and back in to have your shell session gain this group
-membership:
-
-```
-~$ groups
-xos-PG0 root
-~$ vagrant status
-Call to virConnectOpen failed: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied
-~$ logout
-~$ ssh node_name.cloudlab.us
-~$ groups
-xos-PG0 root libvirtd
-```
-
-Once you have done this, you can inspect the status of the VM's by setting the
-`VAGRANT_CWD` environmental variable to the path to the cord-in-a-box
-`Vagrantfile`'s parent directory, then run `vagrant status`:
-
-```
-~$ export VAGRANT_CWD=~/cord/build
-~$ vagrant status
-Current machine states:
-
-corddev running (libvirt)
-prod running (libvirt)
-switch not created (libvirt)
-leaf-1 running (libvirt)
-leaf-2 running (libvirt)
-spine-1 running (libvirt)
-spine-2 not created (libvirt)
-testbox not created (libvirt)
-compute-node-1 running (libvirt)
-compute-node-2 not created (libvirt)
-compute-node-3 not created (libvirt)
-
-This environment represents multiple VMs. The VMs are all listed
-above with their current state. For more information about a specific
-VM, run `vagrant status NAME`.
-```
-
-### corddev VM
-
-The `corddev` VM is a development machine used by the `cord-in-a-box.sh` script
-to drive the installation. It downloads and builds Docker containers and
-publishes them to the virtual head node (see below). It then installs MaaS on
-the virtual head node (for bare-metal provisioning) and the ONOS, XOS, and
-OpenStack services in containers. This VM can be entered as follows:
-
-```
-$ ssh corddev
-```
-
-The CORD build environment is located in `/cord/build` inside this VM. It is
-possible to manually run individual steps in the build process here if you
-wish; see the [Physical POD Guide](./quickstart_physical.md) for more
-information on how to run build steps.
-
-### prod VM
-
-The `prod` VM is the virtual head node of the POD. It runs the OpenStack,
-ONOS, and XOS services inside containers. It also simulates a subscriber
-devices using a container. To enter it, simply type:
-
-```
-$ ssh prod
-```
-
-Inside the VM, a number of services run in Docker and LXD containers.
-
-```
-vagrant@prod:~$ docker ps
-CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
-043ea433232c xosproject/xos-ui "python /opt/xos/mana" About an hour ago Up About an hour 8000/tcp, 0.0.0.0:8888->8888/tcp cordpod_xos_ui_1
-40b6b05be96c xosproject/xos-synchronizer-exampleservice "bash -c 'sleep 120; " About an hour ago Up About an hour 8000/tcp cordpod_xos_synchronizer_exampleservice_1
-cfd93633bfae xosproject/xos-synchronizer-vtr "bash -c 'sleep 120; " 2 hours ago Up 2 hours 8000/tcp cordpod_xos_synchronizer_vtr_1
-d2d2a0799ca0 xosproject/xos-synchronizer-vsg "bash -c 'sleep 120; " 2 hours ago Up 2 hours 8000/tcp cordpod_xos_synchronizer_vsg_1
-480b5e85e87d xosproject/xos-synchronizer-onos "bash -c 'sleep 120; " 2 hours ago Up 2 hours 8000/tcp cordpod_xos_synchronizer_onos_1
-9686909333c3 xosproject/xos-synchronizer-fabric "bash -c 'sleep 120; " 2 hours ago Up 2 hours 8000/tcp cordpod_xos_synchronizer_fabric_1
-de53b100ce20 xosproject/xos-synchronizer-openstack "bash -c 'sleep 120; " 2 hours ago Up 2 hours 8000/tcp cordpod_xos_synchronizer_openstack_1
-8a250162424c xosproject/xos-synchronizer-vtn "bash -c 'sleep 120; " 2 hours ago Up 2 hours 8000/tcp cordpod_xos_synchronizer_vtn_1
-f1bd21f98a9f xosproject/xos "python /opt/xos/mana" 2 hours ago Up 2 hours 0.0.0.0:81->81/tcp, 8000/tcp cordpodbs_xos_bootstrap_ui_1
-e41ccc63e7dd xosproject/xos "bash -c 'cd /opt/xos" 2 hours ago Up 2 hours 8000/tcp cordpodbs_xos_synchronizer_onboarding_1
-7fdeb35614e8 redis "docker-entrypoint.sh" 2 hours ago Up 2 hours 6379/tcp cordpodbs_xos_redis_1
-84fa440023bf xosproject/xos-postgres "/usr/lib/postgresql/" 2 hours ago Up 2 hours 5432/tcp cordpodbs_xos_db_1
-ef0dd85badf3 onosproject/onos:latest "./bin/onos-service" 2 hours ago Up 2 hours 0.0.0.0:6653->6653/tcp, 0.0.0.0:8101->8101/tcp, 0.0.0.0:8181->8181/tcp, 0.0.0.0:9876->9876/tcp onosfabric_xos-onos_1
-e2348ddee189 xos/onos "./bin/onos-service" 2 hours ago Up 2 hours 0.0.0.0:6654->6653/tcp, 0.0.0.0:8102->8101/tcp, 0.0.0.0:8182->8181/tcp, 0.0.0.0:9877->9876/tcp onoscord_xos-onos_1
-f487db716d8c docker-registry:5000/mavenrepo:candidate "nginx -g 'daemon off" 3 hours ago Up 3 hours 443/tcp, 0.0.0.0:8080->80/tcp mavenrepo
-0a24bcc3640a docker-registry:5000/cord-maas-automation:candidate "/go/bin/cord-maas-au" 3 hours ago Up 3 hours automation
-c5448fb834ac docker-registry:5000/cord-maas-switchq:candidate "/go/bin/switchq" 3 hours ago Up 3 hours 0.0.0.0:4244->4244/tcp switchq
-7690414fec4b docker-registry:5000/cord-provisioner:candidate "/go/bin/cord-provisi" 3 hours ago Up 3 hours 0.0.0.0:4243->4243/tcp provisioner
-833752cd8c71 docker-registry:5000/config-generator:candidate "/go/bin/config-gener" 3 hours ago Up 3 hours 1337/tcp, 0.0.0.0:4245->4245/tcp generator
-300df95eb6bd docker-registry:5000/consul:candidate "docker-entrypoint.sh" 3 hours ago Up 3 hours storage
-e0a68af23e9c docker-registry:5000/cord-ip-allocator:candidate "/go/bin/cord-ip-allo" 3 hours ago Up 3 hours 0.0.0.0:4242->4242/tcp allocator
-240a8b3e5af5 docker-registry:5000/cord-dhcp-harvester:candidate "/go/bin/harvester" 3 hours ago Up 3 hours 0.0.0.0:8954->8954/tcp harvester
-9444c39ffe10 registry:2.4.0 "/bin/registry serve " 3 hours ago Up 3 hours 0.0.0.0:5000->5000/tcp registry
-13d2f04e3b9b registry:2.4.0 "/bin/registry serve " 3 hours ago Up 3 hours 0.0.0.0:5001->5000/tcp registry-mirror
-```
-
-The above shows Docker containers launched by XOS (image names starting with
-`xosproject`). Containers starting with `onos` are running ONOS. There is
-also a Docker image registry, a Maven repository containing the CORD ONOS apps,
-and a number of microservices used in bare-metal provisioning.
-
-```
-vagrant@prod:~$ sudo lxc list
-+-------------------------+---------+------------------------------+------+------------+-----------+
-| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
-+-------------------------+---------+------------------------------+------+------------+-----------+
-| ceilometer-1 | RUNNING | 10.1.0.4 (eth0) | | PERSISTENT | 0 |
-+-------------------------+---------+------------------------------+------+------------+-----------+
-| glance-1 | RUNNING | 10.1.0.5 (eth0) | | PERSISTENT | 0 |
-+-------------------------+---------+------------------------------+------+------------+-----------+
-| juju-1 | RUNNING | 10.1.0.3 (eth0) | | PERSISTENT | 0 |
-+-------------------------+---------+------------------------------+------+------------+-----------+
-| keystone-1 | RUNNING | 10.1.0.6 (eth0) | | PERSISTENT | 0 |
-+-------------------------+---------+------------------------------+------+------------+-----------+
-| mongodb-1 | RUNNING | 10.1.0.13 (eth0) | | PERSISTENT | 0 |
-+-------------------------+---------+------------------------------+------+------------+-----------+
-| nagios-1 | RUNNING | 10.1.0.8 (eth0) | | PERSISTENT | 0 |
-+-------------------------+---------+------------------------------+------+------------+-----------+
-| neutron-api-1 | RUNNING | 10.1.0.9 (eth0) | | PERSISTENT | 0 |
-+-------------------------+---------+------------------------------+------+------------+-----------+
-| nova-cloud-controller-1 | RUNNING | 10.1.0.10 (eth0) | | PERSISTENT | 0 |
-+-------------------------+---------+------------------------------+------+------------+-----------+
-| openstack-dashboard-1 | RUNNING | 10.1.0.11 (eth0) | | PERSISTENT | 0 |
-+-------------------------+---------+------------------------------+------+------------+-----------+
-| percona-cluster-1 | RUNNING | 10.1.0.7 (eth0) | | PERSISTENT | 0 |
-+-------------------------+---------+------------------------------+------+------------+-----------+
-| rabbitmq-server-1 | RUNNING | 10.1.0.12 (eth0) | | PERSISTENT | 0 |
-+-------------------------+---------+------------------------------+------+------------+-----------+
-| testclient | RUNNING | 192.168.0.244 (eth0.222.111) | | PERSISTENT | 0 |
-+-------------------------+---------+------------------------------+------+------------+-----------+
-```
-
-The LXD containers with names ending in `-1` are running
-OpenStack-related services. These containers can be
-entered as follows:
-
-```
-$ ssh ubuntu@<container-name>
-```
-
-The `testclient` container runs the simulated subscriber device used
-for running simple end-to-end connectivity tests. Its only connectivity is
-to the vSG, but it can be entered using:
-
-```
-$ sudo lxc exec testclient bash
-```
-
-### compute_node-1 VM
-
-The `compute_node-1` VM is the virtual compute node controlled by OpenStack.
-This VM can be entered from the `prod` VM. Run `cord prov list` to get the
-node name (assigned by MaaS). The node name will be something like
-`bony-alley.cord.lab`; in this case, to login you'd run:
-
-```
-$ ssh ubuntu@bony-alley
-```
-
-Virtual machines created via XOS/OpenStack will be instantiated on this
-compute node. To login to an OpenStack VM, first get the management IP
-address (172.27.0.x):
-
-```
-vagrant@prod:~$ source /opt/cord_profile/admin-openrc.sh
-vagrant@prod:~$ nova list --all-tenants
-+--------------------------------------+-------------------------+--------+------------+-------------+---------------------------------------------------+
-| ID | Name | Status | Task State | Power State | Networks |
-+--------------------------------------+-------------------------+--------+------------+-------------+---------------------------------------------------+
-| 3ba837a0-81ff-47b5-8f03-020175eed6b3 | mysite_exampleservice-2 | ACTIVE | - | Running | management=172.27.0.3; public=10.6.1.194 |
-| 549ffc1e-c454-4ef8-9df7-b02ab692eb36 | mysite_vsg-1 | ACTIVE | - | Running | management=172.27.0.2; mysite_vsg-access=10.0.2.2 |
-+--------------------------------------+-------------------------+--------+------------+-------------+---------------------------------------------------+
-```
-
-The VM hosting the vSG is called `mysite_vsg-1` and we see it has a management IP of 172.27.0.2.
-Then run `ssh-agent` and add the default key (used to access the OpenStack VMs):
-
-```
-vagrant@prod:~$ ssh-agent bash
-vagrant@prod:~$ ssh-add
-```
-
-SSH to the compute node with the `-A` option and then to the VM using the
-management IP obtained above. So if the compute node name is `bony-alley` and
-the management IP is 172.27.0.2:
-
-```
-vagrant@prod:~$ ssh -A ubuntu@bony-alley
-ubuntu@bony-alley:~$ ssh ubuntu@172.27.0.2
-
-# Now you're inside the mysite-vsg-1 VM
-ubuntu@mysite-vsg-1:~$
-```
-
-### leaf-[12] and spine-[12] VMs
-
-These VMs run software switches for the CORD fabric. In the default
-configuration they run standard Linux bridges. If you have chosen to run
-cord-in-a-box.sh with the experimental `-f` option, the VMs run CPqD switches
-controlled by ONOS running in the `onosfabric_xos-onos_1` container.
-
-### MaaS GUI
-
-You can access the MaaS (Metal-as-a-Service) GUI by pointing your browser to
-the URL `http://<target-server>:8080/MAAS/`. E.g., if you are running on CloudLab,
-your `<target-server>` is the hostname of your CloudLab node.
-The username is `cord` and the auto-generated password is found in `~/cord/build/maas/passwords/maas_user.txt`.
-For more information on MaaS, see [the MaaS documentation](http://maas.io/docs).
-
-### XOS GUI
-
-You can access the XOS GUI by pointing your browser to URL
-`http://<target-server>:8080/xos/`. The username is
-`xosadmin@opencord.org` and the auto-generated password is found in
-`~/cord/build/platform-install/credentials/xosadmin@opencord.org`.
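-
-For convenience, the password can be printed on the target server; this simply reads the credentials file mentioned above:
-
-```
-cat ~/cord/build/platform-install/credentials/xosadmin@opencord.org
-```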
-
-The state of the system is that all CORD services have been onboarded to XOS.
-You can see them in the `Service Graph` shown on the `Home` page.
-If you want to see more details about the services, navigate to `Core > Services`,
-or search for `Service` in the top bar (you can start a search by pressing `f`).
-
-A sample CORD subscriber has also been created. You can see the `Service Graph`
-for subscribers by selecting the `Service Graph` item in the left navigation.
-
-Here is a sample output:
-![subscriber-service-graph.png](subscriber-service-graph.png)
-_NOTE that the `Service Graph` will need to be detangled. You can organize the nodes by dragging them around._
-
-### Kibana log viewing GUI
-
-The Kibana web interface to the ElasticStack log aggregation system can be
-found at: `http://<target-server>:8080/kibana/`.
-
-On initial login, you will be asked to create an index for the `logstash-*`
-files - do this and then access the main logging interface under `Discover`.
-More information on using Kibana can be found [in its
-documentation](https://www.elastic.co/guide/en/kibana/current/index.html).
-
-## Test Results
-
-After CORD-in-a-Box was set up, a couple of basic health
-tests were executed on the platform. The results of these tests can be
-found near the end of `~/install.out`.
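-
-For example, a quick way to review the results without opening the whole file is to print its last section (the number of lines shown here is arbitrary):
-
-```
-tail -n 100 ~/install.out
-```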
-
-### test-vsg
-
-This tests the E2E connectivity of the POD by performing the following steps:
-
- * Sets up a sample CORD subscriber in XOS
- * Launches a vSG for that subscriber on the CORD POD
- * Creates a test client, corresponding to a device in the subscriber's
- household
- * Connects the test client to the vSG using a simulated OLT
- * Runs `ping` in the client to a public IP address in the Internet
-
-Success means that traffic is flowing between the subscriber household and the
-Internet via the vSG. If it succeeded, you should see some lines like these in
-the output:
-
-```
-TASK [test-vsg : Output from ping test] ****************************************
-Thursday 27 October 2016 15:29:17 +0000 (0:00:03.144) 0:19:21.336 ******
-ok: [10.100.198.201] => {
- "pingtest.stdout_lines": [
- "PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.",
- "64 bytes from 8.8.8.8: icmp_seq=1 ttl=47 time=29.7 ms",
- "64 bytes from 8.8.8.8: icmp_seq=2 ttl=47 time=29.2 ms",
- "64 bytes from 8.8.8.8: icmp_seq=3 ttl=47 time=29.1 ms",
- "",
- "--- 8.8.8.8 ping statistics ---",
- "3 packets transmitted, 3 received, 0% packet loss, time 2003ms",
- "rtt min/avg/max/mdev = 29.176/29.367/29.711/0.243 ms"
- ]
-}
-```
-
-### test-exampleservice
-
-This test builds on `test-vsg` by loading the *exampleservice* described in the
-[Tutorial on Assembling and On-Boarding
-Services](https://wiki.opencord.org/display/CORD/Assembling+and+On-Boarding+Services%3A+A+Tutorial).
-The purpose of the *exampleservice* is to demonstrate how new subscriber-facing
-services can be easily deployed to a CORD POD. This test performs the following
-steps:
-
- * On-boards *exampleservice* into the CORD POD
- * Creates an *exampleservice* tenant, which causes a VM to be created and
- Apache to be loaded and configured inside
- * Runs a `curl` from the subscriber test client, through the vSG, to the
- Apache server.
-
-Success means that the Apache server launched by the *exampleservice* tenant is
-fully configured and is reachable from the subscriber client via the vSG. If
-it succeeded, you should see some lines like these in the output:
-
-```
-TASK [test-exampleservice : Output from curl test] *****************************
-Thursday 27 October 2016 15:34:40 +0000 (0:00:01.116) 0:24:44.732 ******
-ok: [10.100.198.201] => {
- "curltest.stdout_lines": [
- "ExampleService",
- " Service Message: \"hello\"",
- " Tenant Message: \"world\""
- ]
-}
-```
-
-## Development Workflow
-
-CORD-in-a-Box is a useful environment for integration testing and
-debugging. A typical scenario is to find a problem, and then rebuild and redeploy
-some XOS containers (e.g., a service synchronizer) to verify a fix. A
-workflow for quickly rebuilding and redeploying the XOS containers from source is:
-
- * Make changes in your source tree, under `~/cord/orchestration/xos*`
- * Login to the `corddev` VM and `cd /cord/build`
- * `./gradlew :platform-install:buildImages`
- * `./gradlew -PdeployConfig=config/cord_in_a_box.yml :platform-install:publish`
- * `./gradlew -PdeployConfig=config/cord_in_a_box.yml :orchestration:xos:publish`
-
-Additionally, if you made any changes to a profile (e.g., you added a new service), you'll need to re-sync the configuration from the build node to the head node. To do this run:
-
- * `./gradlew -PdeployConfig=config/cord_in_a_box.yml PIprepPlatform`
-
-Now the new XOS images should be published to the registry on `prod`. To bring them up, login to the `prod` VM and define these aliases:
-
-```
-CORD_PROFILE=$( cat /opt/cord_profile/profile_name )
-alias xos-pull="docker-compose -p $CORD_PROFILE -f /opt/cord_profile/docker-compose.yml pull"
-alias xos-up="docker-compose -p $CORD_PROFILE -f /opt/cord_profile/docker-compose.yml up -d"
-alias xos-teardown="pushd /opt/cord/build/platform-install; ansible-playbook -i inventory/head-localhost --extra-vars @/opt/cord/build/genconfig/config.yml teardown-playbook.yml; popd"
-alias compute-node-refresh="pushd /opt/cord/build/platform-install; ansible-playbook -i /etc/maas/ansible/pod-inventory --extra-vars=@/opt/cord/build/genconfig/config.yml compute-node-refresh-playbook.yml; popd"
-```
-
-To pull new images from the registry and launch the containers, while retaining the existing XOS database, run:
-
-```
-$ xos-pull; xos-up
-```
-
-Alternatively, to remove the XOS database and reinitialize XOS from scratch, run:
-
-```
-$ xos-teardown; xos-pull; xos-up; compute-node-refresh
-```
-
-
-## Troubleshooting
-
-If the CORD-in-a-Box build fails, you may try simply resuming the build at the
-place that failed. The easiest way to do this is to re-run the
-`cord-in-a-box.sh` script; this will start the build at the beginning and skip
-over the steps that have already been completed.
-
-If that doesn't work, the next thing to try is running `cord-in-a-box.sh -c` (specify
-the `-c` flag). This causes the script to clean up the previous installation
-and start from scratch.
-
-If running `cord-in-a-box.sh -c` repeatedly fails for you, please tell us
-about it on the [CORD Slack channel](https://slackin.opencord.org/)!
-
-## Congratulations
-
-If you got this far, you successfully built, deployed, and tested your first
-CORD POD.
-
-You are now ready to bring up a multi-node POD with a real switching fabric and
-multiple physical compute nodes. The process for doing so is
-described in the [Physical POD Guide](./quickstart_physical.md).
-
diff --git a/docs/quickstart_physical.md b/docs/quickstart_physical.md
deleted file mode 100644
index 7e24350..0000000
--- a/docs/quickstart_physical.md
+++ /dev/null
@@ -1,923 +0,0 @@
-# Physical POD: Quick Start Guide
-
-This guide is meant to enable the user to utilize the artifacts of this
-repository to deploy CORD onto a physical hardware rack. The artifacts in
-this repository will deploy CORD against a standard physical rack wired
-according to the **best practices** as defined in this document.
-
-**NOTE:** *If you are new to CORD, you should start by bringing up a development
-POD on a single physical server to get familiar with the CORD deployment
-process. Instructions to do so can be found [here](./quickstart.md).*
-
-## Physical Configuration
-![Physical Hardware Connectivity](images/physical.png)
-
-As depicted in the diagram above the base model for the CORD POD deployment
-contains:
-- 4 OF switches comprising the leaf - spine fabric utilized for data traffic
-- 4 compute nodes with 2 40G ports and 2 1G ports
-- 1 top of rack (TOR) switch utilized for management communications
-
-The best practices in terms of connecting the components of the CORD POD
-include:
-- Leaf nodes are connected to the spine nodes starting at the highest port
-number on the leaf.
-- For a given leaf node, its connection to the spine nodes terminate on the
-same port number on each spine.
-- Leaf *n* connections to spine nodes terminate at port *n* on each spine
-node.
-- Leaf spine switches are connected into the management TOR starting from the
-highest port number.
-- Compute nodes fabric interfaces (typically 40G or 10G) are named *eth0* and *eth1*.
-- Compute nodes POD management interfaces (typically 1G) are named *eth2* and *eth3*.
-- Compute node *n* is connected to the management TOR switch on port *n*,
-egressing from the compute node at *eth2*.
-- Compute node *n* is connected to its primary leaf, egressing at *eth0* and terminating on the leaf at port *n*.
-- Compute node *n* is connected to its secondary leaf, egressing at *eth1* and
-terminating on the leaf at port *n*.
-- *eth3* on the head node is the uplink from the POD to the Internet.
-
-The following assumptions are made about the physical CORD POD being deployed:
-- The leaf - spine switches are Accton 6712s
-- The compute nodes are using 40G Intel NIC cards
-- The compute node that is to be designated the *head node* has
-**Ubuntu 14.04 LTS** installed. In addition, the user should have **password-less sudo permission**.
-
-**Prerequisite: Vagrant is installed and operational.**
-**Note:** *This quick start guide has only been tested against Vagrant and
-VirtualBox, specifically on MacOS.*
-
-## Bootstrap the Head Node
-The head node is the key to the physical deployment of a CORD POD. The
-automated deployment of the physical POD is designed such that the head node is
-manually deployed, with the aid of automation tools such as Ansible; from
-this head node the rest of the POD deployment is automated.
-
-The head node is deployed from a host outside the CORD POD (OtP).
-
-## Install Repo
-
-Make sure you have a bin directory in your home directory and that it is
-included in your path:
-
-```
-mkdir ~/bin
-PATH=~/bin:$PATH
-```
-
-(of course you can put repo wherever you want)
-
-Download the Repo tool and ensure that it is executable:
-
-```
-curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
-chmod a+x ~/bin/repo
-```
-
-## Clone the Repository
-To clone the repository, on your OtP build host issue the following commands:
-```
-mkdir opencord && cd opencord
-repo init -u https://gerrit.opencord.org/manifest -b master
-```
-**NOTE:** _In the example above the OpenCORD version cloned was the `master`
-branch of the source tree. If a different version is desired then `master`
-should be replaced with the name of the desired version, `cord-2.0` for
-example._
-
-Fetch the opencord source code
-```
-repo sync
-```
-
-### Complete
-When this is complete, a listing (`ls`) of this directory should yield output
-similar to:
-```
-ls -F
-build/ component/ incubator/ onos-apps/ orchestration/ test/
-```
-## Create the Development Machine
-
-The development environment is required for the tasks in this repository.
-This environment leverages [Vagrant](https://www.vagrantup.com/docs/getting-started/)
-to install the tools required to build and deploy the CORD software.
-
-To create the development machine the following Vagrant command can be
-used. This will create an Ubuntu 14.04 LTS based virtual machine and install
-some basic required packages, such as Docker, Docker Compose, and
-Oracle Java 8.
-```
-cd build
-vagrant up corddev
-```
-**NOTE:** *The VM will consume 2G RAM and about 12G disk space. Make sure it can obtain
-sufficient resources. It may take several minutes for the first command
-`vagrant up corddev` to complete as it will include creating the VM as well as
-downloading and installing various software packages.*
-
-### Complete
-
-Once the Vagrant VM is created and provisioned, you will see output ending
-with:
-```
-==> corddev: PLAY RECAP *********************************************************************
-==> corddev: localhost : ok=29 changed=25 unreachable=0 failed=0
-```
-The important thing is that the *unreachable* and *failed* counts are both zero.
-
-## Connect to the Development Machine
-To connect to the development machine the following vagrant command can be used.
-```
-vagrant ssh corddev
-```
-
-Once connected to the Vagrant machine, you can find the deployment artifacts
-in the `/cord` directory on the VM.
-```
-cd /cord/build
-```
-
-### Gradle
-[Gradle](https://gradle.org/) is the build tool that is used to help
-orchestrate the build and deployment of a POD. A *launch* script is included
-in the Vagrant machine that will automatically download and install `gradle`.
-The script is called `gradlew` and the download / install will be invoked on
-the first use of this script; thus the first use may take a little longer
-than subsequent invocations and requires a connection to the internet.
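-
-As a quick sanity check you can invoke the wrapper with Gradle's built-in `tasks` task; the first invocation downloads and installs `gradle` and then lists the tasks available in this build:
-
-```
-./gradlew tasks
-```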
-
-### Complete
-Once you have created and connected to the development environment this task is
-complete. The `cord` repository files can be found on the development machine
-under `/cord`. This directory is mounted from the host machine so changes
-made to files in this directory will be reflected on the host machine and
-vice-versa.
-
-
-## Fetch
-The fetching phase of the deployment pulls Docker images from the public
-repository down to the local machine as well as clones any `git` submodules
-that are part of the project. This phase can be initiated with the following
-command:
-```
-./gradlew fetch
-```
-**NOTE:** *The first time you run `./gradlew` it will download the `gradle`
-binary from the Internet and install it locally. This is a one time operation,
-but may be time consuming depending on the speed of your Internet connection.*
-
-### Complete
-Once the fetch command has successfully been run, this step is complete. After
-this command completes you should be able to see the Docker images that were
-downloaded using the `docker images` command on the development machine:
-```
-docker images
-REPOSITORY TAG IMAGE ID CREATED SIZE
-xosproject/xos-postgres candidate c17f15922d35 20 hours ago 348MB
-xosproject/xos-postgres latest c17f15922d35 20 hours ago 348MB
-nginx candidate 958a7ae9e569 3 days ago 109MB
-nginx latest 958a7ae9e569 3 days ago 109MB
-xosproject/xos-base candidate 4a6b75a0f05a 6 days ago 932MB
-xosproject/xos-base latest 4a6b75a0f05a 6 days ago 932MB
-python 2.7-alpine 3dd614730c9c 7 days ago 72MB
-onosproject/onos <none> e41f6f8b2570 2 weeks ago 948MB
-gliderlabs/consul-server candidate 7ef15b0d1bdb 4 months ago 29.2MB
-gliderlabs/consul-server latest 7ef15b0d1bdb 4 months ago 29.2MB
-node <none> c09e81cac06c 4 months ago 650MB
-redis <none> 74b99a81add5 7 months ago 183MB
-consul <none> 62f109a3299c 11 months ago 41.1MB
-swarm <none> 47dc182ea74b 13 months ago 19.3MB
-nginx <none> 3c69047c6034 13 months ago 183MB
-gliderlabs/registrator candidate 3b59190c6c80 13 months ago 23.8MB
-gliderlabs/registrator latest 3b59190c6c80 13 months ago 23.8MB
-xosproject/vsg <none> dd026689aff3 13 months ago 336MB
-```
-
-## Build Images
-Bare metal provisioning leverages utilities built and packaged as Docker
-container images. These utilities are:
-
- - cord-maas-bootstrap - (directory: `bootstrap`) run at MAAS installation
- time to customize the MAAS instance via REST interfaces
- - cord-maas-automation - (directory: `automation`) daemon on the head node to
- automate PXE booted servers through the MAAS bare metal deployment work flow
- - cord-maas-switchq - (directory: `switchq`) daemon on the head
- node that watches for new switches being added to the POD and triggers
- provisioning when a switch is identified (via the OUI on MAC address).
- - cord-maas-provisioner - (directory: `provisioner`) daemon on the head node
-   to manage the execution of ansible playbooks against switches and compute
- nodes as they are added to the POD.
- - cord-ip-allocator - (directory: `ip-allocator`) daemon on the head node used
-   to allocate IP addresses for the fabric interfaces.
- - cord-dhcp-harvester - (directory: `harvester`) run on the head node to
- facilitate CORD / DHCP / DNS integration so that all hosts can be resolved
- via DNS
- - opencord/mavenrepo - custom CORD maven repository image to support
- ONOS application loading from a local repository
- - cord-test/nose - container from which cord tester test cases originate and
- validate traffic through the CORD infrastructure
- - cord-test/quagga - BGP virtual router to support uplink from CORD fabric
- network to Internet
- - cord-test/radius - Radius server to support cord-tester capability
-
-The images can be built by using the following command.
-```
-./gradlew buildImages
-```
-**NOTE:** *The first time you run `./gradlew` it will download the `gradle`
-binary from the Internet and install it locally. This is a one time operation,
-but may be time consuming depending on the speed of your Internet connection.*
-
-### Complete
-Once the `buildImages` command successfully runs this task is complete. The
-CORD artifacts have been built and the Docker images can be viewed by using the
-`docker images` command on the development machine.
-```
-docker images --format 'table {{.Repository}}\t{{.Tag}}\t{{.Size}}\t{{.ID}}'
-REPOSITORY TAG SIZE IMAGE ID
-xosproject/xos-ui candidate 943MB 23a2e5523279
-xosproject/exampleservice-synchronizer candidate 940MB b7cd75514f65
-xosproject/fabric-synchronizer candidate 940MB 2b183b0504fd
-xosproject/vtr-synchronizer candidate 940MB f2955d88bf63
-xosproject/vsg-synchronizer candidate 940MB 680b400ba627
-xosproject/vrouter-synchronizer candidate 940MB 332cb7817586
-xosproject/onos-synchronizer candidate 940MB 12fe520e29ae
-xosproject/openstack-synchronizer candidate 940MB 6a2e9b56ecba
-xosproject/vtn-synchronizer candidate 940MB 5edf77bcd615
-xosproject/gui-extension-rcord candidate 1.02GB 0cf6c1defba5
-xosproject/gui-extension-vtr candidate 1.02GB c78d602c5359
-xosproject/xos-corebuilder candidate 932MB c73eab2918c2
-xosproject/xos-gui-extension-builder candidate 1.02GB 78fe07e95ba9
-xosproject/xos-gui candidate 652MB e57d903b9766
-xosproject/xos-ws candidate 682MB 39cefcaa50bc
-xosproject/xos-synchronizer-base candidate 940MB a914ae0d1aba
-xosproject/xos-client candidate 940MB 667948589bc9
-xosproject/chameleon candidate 936MB cdb9d6996401
-xosproject/xos candidate 942MB 11f37fd19b0d
-xosproject/xos-postgres candidate 345MB f038a31b50be
-opencord/mavenrepo latest 271MB 2df1c4d790bf
-cord-maas-switchq candidate 14MB 8a44d3070ffd
-cord-maas-switchq build 252MB 77d7967c14e4
-cord-provisioner candidate 96.7MB de87aa48ffc4
-cord-provisioner build 250MB beff949ff60b
-cord-dhcp-harvester candidate 18.8MB 79780c133469
-cord-dhcp-harvester build 254MB c7950fc044dd
-config-generator candidate 12.2MB 37a51b0acdb2
-config-generator build 249MB 3d1f1faaf5e1
-cord-maas-automation candidate 13.6MB 2af5474082d4
-cord-maas-automation build 251MB 420d5328dc11
-cord-ip-allocator candidate 12.1MB 6d8aed37cb91
-cord-ip-allocator build 249MB 7235cbd3d771
-ubuntu 14.04.5 188MB 132b7427a3b4
-xosproject/xos-postgres latest 348MB c17f15922d35
-golang 1.7-alpine 241MB e40088237856
-nginx latest 109MB 958a7ae9e569
-xosproject/xos-base candidate 932MB 4a6b75a0f05a
-xosproject/xos-base latest 932MB 4a6b75a0f05a
-python 2.7-alpine 72MB 3dd614730c9c
-alpine 3.5 3.99MB 75b63e430bd1
-onosproject/onos <none> 948MB e41f6f8b2570
-node 7.9.0 665MB 90223b3d894e
-gliderlabs/consul-server candidate 29.2MB 7ef15b0d1bdb
-gliderlabs/consul-server latest 29.2MB 7ef15b0d1bdb
-node <none> 650MB c09e81cac06c
-redis <none> 183MB 74b99a81add5
-consul <none> 41.1MB 62f109a3299c
-swarm <none> 19.3MB 47dc182ea74b
-nginx candidate 183MB 3c69047c6034
-gliderlabs/registrator candidate 23.8MB 3b59190c6c80
-gliderlabs/registrator latest 23.8MB 3b59190c6c80
-xosproject/vsg <none> 336MB dd026689aff3
-```
-
-**NOTE:** *Not all the above Docker images were built by the `buildImages`
-command. Some of them, like golang, are used as a base for other Docker
-images.*
-
-## Deployment Configuration File
-The commands to deploy the POD can be customized via a *deployment configuration
-file*. The file is in [YAML](http://yaml.org/) format.
-
-To construct a configuration file for your physical POD, copy the
-sample deployment configuration found in `config/sample.yml` and modify the
-values to fit your physical deployment. Descriptions of the values can be
-found in the sample file.
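-
-For example, on the development machine (the file name `podX.yml` is only a placeholder; use any name you like and pass it to the `-PdeployConfig` option of the later commands):
-
-```
-cd /cord/build
-cp config/sample.yml config/podX.yml
-# edit config/podX.yml to match your physical deployment
-```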
-
-## Publish
-Publishing consists of *pushing* the built docker images to the Docker
-repository on the target head node. This step can take a while as it has to
-transfer all the images from the development machine to the target head node.
-This step is started with the following command:
-```
-./gradlew -PdeployConfig=config/podX.yml publish
-```
-
-### Complete
-
-Once the `publish` command successfully runs this task is complete. When this
-step is complete a Docker registry and Docker registry mirror will be running
-on the head node. This can be verified using the `docker ps` command on the
-head node:
-```
-docker ps --format 'table {{.ID}}\t{{.Image}}\t{{.Command}}\t{{.CreatedAt}}'
-CONTAINER ID IMAGE COMMAND CREATED AT
-c8dd48fc9d18 registry:2.4.0 "/bin/registry serve " 2016-12-02 11:49:12 -0800 PST
-e983d2e43760 registry:2.4.0 "/bin/registry serve " 2016-12-02 11:49:12 -0800 PST
-```
-
-We can also query the docker registry on the head node. We should be able to
-observe a list of docker images.
-
-_Note: the example below uses the commands `curl` and `jq`
-to retrieve data and pretty print JSON. If your system doesn't have `curl` or
-`jq` installed it can be installed using `sudo apt-get install -y curl jq`._
-
-```
-curl -sS http://head-node-ip-address:5000/v2/_catalog | jq .
-{
- "repositories": [
- "config-generator",
- "consul",
- "cord-dhcp-harvester",
- "cord-ip-allocator",
- "cord-maas-automation",
- "cord-maas-switchq",
- "cord-provisioner",
- "gliderlabs/consul-server",
- "gliderlabs/registrator",
- "mavenrepo",
- "nginx",
- "node",
- "onosproject/onos",
- "redis",
- "swarm",
- "xosproject/chameleon",
- "xosproject/exampleservice-synchronizer",
- "xosproject/fabric-synchronizer",
- "xosproject/gui-extension-rcord",
- "xosproject/gui-extension-vtr",
- "xosproject/onos-synchronizer",
- "xosproject/openstack-synchronizer",
- "xosproject/vrouter-synchronizer",
- "xosproject/vsg",
- "xosproject/vsg-synchronizer",
- "xosproject/vtn-synchronizer",
- "xosproject/vtr-synchronizer",
- "xosproject/xos",
- "xosproject/xos-client",
- "xosproject/xos-corebuilder",
- "xosproject/xos-gui",
- "xosproject/xos-postgres",
- "xosproject/xos-synchronizer-base",
- "xosproject/xos-ui",
- "xosproject/xos-ws"
- ]
-}
-```
-
-## Deploy Bare Metal Provisioning Capabilities
-There are three parts to deploying bare metal: deploying the head node PXE
-server (`MAAS`), PXE booting a compute node, and post deployment provisioning
-of the compute node. These tasks are accomplished utilizing additional
-Vagrant machines as well as executing `gradle` tasks in the Vagrant
-development machine. This task also provisions XOS. XOS provides service
-provisioning and orchestration for the CORD POD.
-
-### Deploy MAAS and XOS
-Canonical MAAS provides the PXE and other bare metal provisioning services for
-CORD and will be deployed on the head node.
-```
-./gradlew -PdeployConfig=config/podX.yml deploy
-```
-
-This task can take some time so be patient. It should complete without errors,
-so if an error is encountered, something has gone Horribly Wrong(tm). See the
-[Getting Help](#getting-help) section.
-
-### Complete
-
-This step is complete when the command successfully runs. The Web UI for MAAS
-can be viewed by browsing to the target machine using a URL of the form
-`http://head-node-ip-address/MAAS`. To login to the web page, use
-`cord` for the username. If you have set a password in the deployment configuration,
-use that; otherwise the password can be found in your build directory
-under `<base>/build/maas/passwords/maas_user.txt`.
-
-After the `deployBase` command installs `MAAS`, it initiates the download of
-an Ubuntu 14.04 boot image that will be used to boot the other POD servers.
-This download can take some time and the process cannot continue until the
-download is complete. The status of the download can be verified through
-the UI by visiting the URL `http://head-node-ip-address/MAAS/images/`,
-or from the command line on the head node with the following commands:
-```
-APIKEY=$(sudo maas-region-admin apikey --user=cord)
-maas login cord http://localhost/MAAS/api/1.0 "$APIKEY"
-maas cord boot-resources read | jq 'map(select(.type != "Synced"))'
-```
-
-If the output of the above commands is not an empty list, `[]`, then the
-images have not yet been completely downloaded. Depending on your network speed
-this could take several minutes. Please wait and then attempt the last command
-again until the returned list is empty, `[]`. When the list is empty you can
-proceed.
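-
-If you prefer to poll rather than re-running the command by hand, a small loop like the following (run on the head node after the `maas login` above; the 30 second interval is arbitrary) waits until all images report as synced:
-
-```
-until [ "$(maas cord boot-resources read | jq 'map(select(.type != "Synced")) | length')" -eq 0 ]; do
-  sleep 30
-done
-echo "All boot images are synced"
-```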
-
-Browse around the UI and get familiar with MAAS via documentation at
-`http://maas.io`
-
-The deployment of XOS includes a deployment of OpenStack.
-
-## Booting Compute Nodes
-
-### Network configuration
-The CORD POD uses two core network interfaces, `fabric` and `mgmtbr`. The
-`fabric` interface will be used to bond all interfaces meant to be used for CORD
-data traffic and the `mgmtbr` will be used to bridge all interfaces used for POD
-management (signaling) traffic.
-
-An additional interface of importance on the head node is the external interface,
-i.e. the interface through which the management network accesses upstream servers,
-such as the Internet.
-
-How physical interfaces are identified and mapped to either the `fabric` or
-`mgmtbr` interface is a combination of their name, NIC driver, and/or bus type.
-
-By default any interface that has a module or kernel driver of `tun`, `bridge`,
-`bonding`, or `veth` will be ignored when selecting devices for the `fabric` and
-`mgmtbr` interfaces, as will any interface that is not associated with a bus
-type or has a bus type of `N/A` or `tap`. For your specific deployment you can
-verify the interface information using the `ethtool -i <name>` command at the
-Linux prompt.
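-
-For example, to see which kernel module (driver) and bus type an interface reports, which are the fields the selection criteria match on, you can run the following (replace `eth0` with the interface you are interested in):
-
-```
-sudo ethtool -i eth0
-# the "driver:" line corresponds to the module type criteria,
-# and the "bus-info:" line corresponds to the bus type criteria
-```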
-
-All other interfaces that are not ignored will be considered for selection to
-either the `fabric` or `mgmtbr` interface. By default, any interface that has a
-module or kernel driver of `i40e` or `mlx4_en` will be selected to the `fabric`
-interface and all others will be selected to the `mgmtbr` interface.
-
-Since the `fabric` interface is a `bond`, the first interface, sorted
-alphanumerically by name, will be used as the primary interface.
-
-For the management network an interface bond, `mgmtbond`, is created to provide
-a redundant network over the physical interfaces. The bridge, `mgmtbr`, associates
-this bond interface and various other virtual interfaces together to enable
-management communication between compute nodes, containers, and virtual machines
-that make up the management software for CORD.
-
-#### Customizing Network Configuration
-The network configuration can be customized to your deployment using a set of
-variables that can be set in your deployment configuration file, e.g.
-`podX.yml`. There is a set of include, exclude, and ignore variables that
-operate on the interface name, module type, and bus type. By setting values on
-these variables it is fairly easy to customize the network settings.
-
-The options are processed as follows:
-
-1. If a given interface matches an ignore option, it is not available to be selected into either the `fabric` or `mgmtbr` interface and will not be modified in the `/etc/network/interfaces` file.
-1. If no include criteria are specified and the given interface matches the exclude criteria, then the interface will be set as `manual` configuration in the `/etc/network/interfaces` file and will not be `auto` activated.
-1. If no include criteria are specified and the given interface does _NOT_ match the exclude criteria, then this interface will be included in either the `fabric` or `mgmtbr` interface.
-1. If include criteria are specified and the given interface does not match the criteria, then the interface will be ignored and its configuration will _NOT_ be modified.
-1. If include criteria are specified and the given interface matches the criteria, but it also matches the exclude criteria, then this interface will be set as `manual` configuration in the `/etc/network/interfaces` file and will not be `auto` activated.
-1. If include criteria are specified and the given interface matches the criteria and does _NOT_ match the exclude criteria, then this interface will be included in either the `fabric` or `mgmtbr` interface.
-
-By default, the only criteria specified are the _fabric include module
-types_, and they are set to `i40e,mlx4_en` (_NOTE: the list is now comma
-separated and not vertical bar (`|`) separated._).
-
-If the _fabric include module types_ is specified and the _management exclude
-module types_ are not specified, then by default the _fabric include module
-types_ are used as the _management exclude module types_. This ensures that by
-default the `fabric` and the `mgmtbr` do not intersect on interface module
-types.
-
-If an external interface is specified in the deployment configuration, this
-interface will be added to the _fabric_ and _management_ _ignore names_ lists.
-
-Each of the criteria is specified as a comma separated list of regular
-expressions.
-
-To set the variables you can use the `seedServer.extraVars` section in the
-deployment config file as follows:
-
-```
-seedServer:
- extraVars:
- - 'fabric_include_names=<name1>,<name2>'
- - 'fabric_include_module_types=<mod1>,<mod2>'
- - 'fabric_include_bus_types=<bus1>,<bus2>'
- - 'fabric_exclude_names=<name1>,<name2>'
- - 'fabric_exclude_module_types=<mod1>,<mod2>'
- - 'fabric_exclude_bus_types=<bus1>,<bus2>'
- - 'fabric_ignore_names=<name1>,<name2>'
- - 'fabric_ignore_module_types=<mod1>,<mod2>'
- - 'fabric_ignore_bus_types=<bus1>,<bus2>'
- - 'management_include_names=<name1>,<name2>'
- - 'management_include_module_types=<mod1>,<mod2>'
- - 'management_include_bus_types=<bus1>,<bus2>'
- - 'management_exclude_names=<name1>,<name2>'
- - 'management_exclude_module_types=<mod1>,<mod2>'
- - 'management_exclude_bus_types=<bus1>,<bus2>'
- - 'management_ignore_names=<name1>,<name2>'
- - 'management_ignore_module_types=<mod1>,<mod2>'
- - 'management_ignore_bus_types=<bus1>,<bus2>'
-```
-
-The Ansible scripts configure MAAS to support DHCP/DNS/PXE on the eth2 and
-mgmtbr interfaces.
-
-Once it has been verified that the Ubuntu boot image has been downloaded, the
-compute nodes may be PXE booted.
-
-**Note:** *In order to ensure that the compute nodes PXE boot, the BIOS settings
-may have to be adjusted. Additionally, the remote power management on the
-compute nodes must be enabled.*
-
-The compute node will boot, register with MAAS, and then be shut off. After this
-is complete an entry for the node will be in the MAAS UI at
-`http://head-node-ip-address/MAAS/#/nodes`. It will be given a random
-hostname, in the Canonical way, of an adjective and a noun, such as
-`popular-feast.cord.lab`. *The name will be different for every deployment.* The
-new node will be in the `New` state.
-
-As the machines boot they should be automatically transitioned from `New`
-through the states of `Commissioning` and `Acquired` to `Deployed`.
-
-### Post Deployment Provisioning of the Compute Node
-Once the node is in the `Deployed` state, it will be provisioned for use in a
-CORD POD by the execution of an `Ansible` playbook.
-
-### Complete
-Once the compute node is in the `Deployed` state and post deployment
-provisioning on the compute node is complete, this task is complete.
-
-Logs of the post deployment provisioning of the compute nodes can be found
-in `/etc/maas/ansible/logs` on the head node.
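-
-For example, to follow the most recently written provisioning log on the head node (the log file names depend on your nodes, so this sketch just picks the newest file in that directory):
-
-```
-latest=$(sudo ls -t /etc/maas/ansible/logs | head -1)
-sudo tail -f "/etc/maas/ansible/logs/$latest"
-```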
-
-Additionally, the post deployment provisioning of the compute nodes can be
-queried using the command `cord prov list`
-```
-cord prov list
-ID NAME MAC IP STATUS MESSAGE
-node-c22534a2-bd0f-11e6-a36d-2c600ce3c239 steel-ghost.cord.lab 2c:60:0c:cb:00:3c 10.6.0.107 Complete
-node-c238ea9c-bd0f-11e6-8206-2c600ce3c239 feline-shirt.cord.lab 2c:60:0c:e3:c4:2e 10.6.0.108 Complete
-node-c25713c8-bd0f-11e6-96dd-2c600ce3c239 yellow-plot.cord.lab 2c:60:0c:cb:00:f0 10.6.0.109 Complete
-```
-
-The following status values are defined for the provisioning status:
- - `Pending`- the request has been accepted by the provisioner but not yet
- started
- - `Processing` - the request is being processed and the node is being
- provisioned
- - `Complete` - the provisioning has been completed successfully
- - `Error` - the provisioning has failed and the `message` will be
- populated with the exit message from provisioning.
-
-Please refer to [Re-provision Compute Nodes and Switches
-](#re-provision-compute-nodes-and-switches)
-if you want to restart this process or re-provision an initialized compute node.
-
-## Booting OpenFlow switches
-Once the compute nodes have begun their boot process you may also boot the
-switches that support the leaf spine fabric. These switches should boot into
-ONIE install mode and download their boot image from MAAS.
-
-### Complete
-This step is complete when the command completes successfully. You can verify
-the provisioning of the switches by querying the provisioning service
-using `cord prov list` which will show the status of the switches as well as
-the compute nodes. Switches can be easily identified as their ID will be the
-MAC address of the switch management interface.
-```
-cord prov list
-ID NAME MAC IP STATUS MESSAGE
-cc:37:ab:7c:b7:4c spine-1 cc:37:ab:7c:b7:4c 10.6.0.23 Complete
-cc:37:ab:7c:ba:58 leaf-2 cc:37:ab:7c:ba:58 10.6.0.20 Complete
-cc:37:ab:7c:bd:e6 leaf-1 cc:37:ab:7c:bd:e6 10.6.0.52 Complete
-cc:37:ab:7c:bf:6c spine-2 cc:37:ab:7c:bf:6c 10.6.0.22 Complete
-node-c22534a2-bd0f-11e6-a36d-2c600ce3c239 steel-ghost.cord.lab 2c:60:0c:cb:00:3c 10.6.0.107 Complete
-node-c238ea9c-bd0f-11e6-8206-2c600ce3c239 feline-shirt.cord.lab 2c:60:0c:e3:c4:2e 10.6.0.108 Complete
-node-c25713c8-bd0f-11e6-96dd-2c600ce3c239 yellow-plot.cord.lab 2c:60:0c:cb:00:f0 10.6.0.109 Complete
-```
-The following status values are defined for the provisioning status:
- - `Pending`- the request has been accepted by the provisioner but not yet
- started
- - `Processing` - the request is being processed and the node is being
- provisioned
- - `Complete` - the provisioning has been completed successfully
- - `Error` - the provisioning has failed and the `message` will be
- populated with the exit message from provisioning.
-
-Please refer to [Re-provision Compute Nodes and Switches
-](#re-provision-compute-nodes-and-switches)
-if you want to restart this process or re-provision an initialized switch.
-
-## Post Deployment Configuration of XOS / ONOS VTN app
-
-The compute node provisioning process described above (under [Booting Compute Nodes](#booting-compute-nodes)) will install the servers as OpenStack compute
-nodes. You should be able to see them on the CORD head node by running the
-following commands:
-```
-source ~/admin-openrc.sh
-nova hypervisor-list
-```
-
-You will see output like the following (showing each of the nodes you have
-provisioned):
-```
-+----+--------------------------+
-| ID | Hypervisor hostname |
-+----+--------------------------+
-| 1 | sturdy-baseball.cord.lab |
-+----+--------------------------+
-```
-
-This step performs a small amount of manual configuration to tell VTN how to
-route external traffic, and verifies that the new nodes were added to the ONOS VTN app by XOS.
-
-### Fabric Gateway Configuration
-First, login to the CORD head node and change to the
-`/opt/cord_profile` directory.
-
-To configure the fabric gateway, you will need to edit the file
-`cord-services.yaml`. You will see a section that looks like this:
-
-```
- addresses_vsg:
- type: tosca.nodes.AddressPool
- properties:
- addresses: 10.6.1.128/26
- gateway_ip: 10.6.1.129
- gateway_mac: 02:42:0a:06:01:01
-```
-
-Edit this section so that it reflects the fabric's address block assigned to the
-vSGs, as well as the gateway IP and MAC address that the vSGs should use to
-reach the Internet.
-
-Once the `cord-services.yaml` TOSCA file has been edited as
-described above, push it to XOS by running the following:
-
-```
-cd /opt/cord_profile
-docker-compose -p rcord exec xos_ui python /opt/xos/tosca/run.py xosadmin@opencord.org /opt/cord_profile/cord-services.yaml
-```
-
-### Complete
-
-This step is complete once you see the correct information in the VTN app
-configuration in XOS and ONOS.
-
-To check the VTN configuration maintained by XOS:
- - Go to the "ONOS apps" page in the CORD GUI:
- - URL: `http://<head-node>/xos#/onos/onosapps/`
- - Username: `xosadmin@opencord.org`
- - Password: (contents of `/opt/cord/build/platform-install/credentials/xosadmin@opencord.org`)
- - Select *VTN_ONOS_app* in the table
- - Verify that the *Backend status* is *1 - OK*
-
-To check that the network configuration has been successfully pushed
-to the ONOS VTN app and processed by it:
-
- - Log into ONOS from the head node
- - Command: `ssh -p 8102 onos@onos-cord`
- - Password: `rocks`
- - Run the `cordvtn-nodes` command
- - Verify that the information for all nodes is correct
- - Verify that the initialization status of all nodes is *COMPLETE*. This will look like the following:
-```
-onos> cordvtn-nodes
-Hostname Management IP Data IP Data Iface Br-int State
-sturdy-baseball 10.1.0.14/24 10.6.1.2/24 fabric of:0000525400d7cf3c COMPLETE
-Total 1 nodes
-```
- - Run the `netcfg` command. Verify that the updated gateway information is present under `publicGateways`:
-```
- "publicGateways" : [ {
- "gatewayIp" : "10.6.1.193",
- "gatewayMac" : "02:42:0a:06:01:01"
- }, {
- "gatewayIp" : "10.6.1.129",
- "gatewayMac" : "02:42:0a:06:01:01"
- } ],
-
-```
-
-### Troubleshoot
-If the compute node is not initialized properly (i.e. not in the COMPLETE state):
-On the compute node, run
-```
-sudo ovs-vsctl del-br br-int
-```
-On the head node, run
-```
-ssh onos@onos-cord -p 8102
-```
-(password is 'rocks')
-and then in the ONOS CLI, run
-```
-cordvtn-node-init <compute-node-name>
-```
-(name is something like "sturdy-baseball")
-
-## Post Deployment Configuration of the ONOS Fabric
-
-### Manually Configure Routes on the Compute Node `br-int` Interface
-The routes on the compute node `br-int` interface need to be manually configured now.
-Run the following command on compute-1 and compute-2 (nodes in 10.6.1.0/24)
-```
-sudo ip route add 10.6.2.0/24 via 10.6.1.254
-```
-Run the following command on compute-3 and compute-4 (nodes in 10.6.2.0/24)
-```
-sudo ip route add 10.6.1.0/24 via 10.6.2.254
-```
-
-### Modify and Apply Fabric Configuration
-Configuring the switching fabric for use with CORD is documented in the
-[Fabric Configuration Guide](https://wiki.opencord.org/display/CORD/Fabric+Configuration+Guide) on the OpenCORD wiki.
-
-On the head node is a service that will generate an ONOS network configuration
-for the leaf/spine network fabric. This configuration is generated by
-querying ONOS for the known switches and compute nodes and producing a JSON
-structure that can be `POST`ed to ONOS to implement the fabric.
-
-The configuration generator can be invoked using the `cord generate` command.
-The configuration will be generated to `stdout`.
-
-Before generating a configuration you need to make sure that the instance of
-ONOS controlling the fabric doesn't contain any stale data and has processed
-a packet from each of the switches and compute nodes. ONOS needs to process a
-packet because it does not have a mechanism to discover the network, thus to be
-aware of a device on the network ONOS needs to first receive a packet from it.
-
-To remove stale data from ONOS, the ONOS CLI `wipe-out` command can be used:
-```
-ssh -p 8101 onos@onos-fabric wipe-out -r -j please
-Warning: Permanently added '[onos-fabric]:8101,[10.6.0.1]:8101' (RSA) to the list of known hosts.
-Password authentication
-Password: # password is 'rocks'
-Wiping intents
-Wiping hosts
-Wiping Flows
-Wiping groups
-Wiping devices
-Wiping links
-Wiping UI layouts
-Wiping regions
-```
-
-To ensure ONOS is aware of all the switches and the compute nodes, you must
-have each switch "connect" to the controller and have each compute node ping
-over its fabric interface to the controller.
-
-If the switches are not already connected, the following commands will initiate
-a connection.
-
-```shell
-for s in $(cord switch list | grep -v IP | awk '{print $3}'); do
-ssh -qftn root@$s ./connect -bg 2>&1 > $s.log
-done
-```
-
-You can verify ONOS has recognized the devices using the following command:
-
-```shell
-ssh -p 8101 onos@onos-fabric devices
-Warning: Permanently added '[onos-fabric]:8101,[10.6.0.1]:8101' (RSA) to the list of known hosts.
-Password authentication
-Password: # password is 'rocks'
-id=of:0000cc37ab7cb74c, available=true, role=MASTER, type=SWITCH, mfr=Broadcom Corp., hw=OF-DPA 2.0, sw=OF-DPA 2.0, serial=, driver=ofdpa, channelId=10.6.0.23:58739, managementAddress=10.6.0.23, protocol=OF_13
-id=of:0000cc37ab7cba58, available=true, role=MASTER, type=SWITCH, mfr=Broadcom Corp., hw=OF-DPA 2.0, sw=OF-DPA 2.0, serial=, driver=ofdpa, channelId=10.6.0.20:33326, managementAddress=10.6.0.20, protocol=OF_13
-id=of:0000cc37ab7cbde6, available=true, role=MASTER, type=SWITCH, mfr=Broadcom Corp., hw=OF-DPA 2.0, sw=OF-DPA 2.0, serial=, driver=ofdpa, channelId=10.6.0.52:37009, managementAddress=10.6.0.52, protocol=OF_13
-id=of:0000cc37ab7cbf6c, available=true, role=MASTER, type=SWITCH, mfr=Broadcom Corp., hw=OF-DPA 2.0, sw=OF-DPA 2.0, serial=, driver=ofdpa, channelId=10.6.0.22:44136, managementAddress=10.6.0.22, protocol=OF_13
-```
-
-To make sure that ONOS is aware of the compute nodes, the following command will
-send a ping over the fabric interface on each of the compute nodes.
-
-```shell
-for h in localhost $(cord prov list | grep "^node" | awk '{print $4}'); do
-ssh -qftn $h ping -c 1 -I fabric 8.8.8.8;
-done
-```
-
-You can verify ONOS has recognized the hosts using the following command:
-```shell
-ssh -p 8101 onos@onos-fabric hosts
-Warning: Permanently added '[onos-fabric]:8101,[10.6.0.1]:8101' (RSA) to the list of known hosts.
-Password authentication
-Password: # password is 'rocks'
-id=00:16:3E:DF:89:0E/None, mac=00:16:3E:DF:89:0E, location=of:0000cc37ab7cba58/3, vlan=None, ip(s)=[10.6.0.54], configured=false
-id=3C:FD:FE:9E:94:28/None, mac=3C:FD:FE:9E:94:28, location=of:0000cc37ab7cba58/4, vlan=None, ip(s)=[10.6.0.53], configured=false
-id=3C:FD:FE:9E:94:30/None, mac=3C:FD:FE:9E:94:30, location=of:0000cc37ab7cbde6/1, vlan=None, ip(s)=[10.6.1.1], configured=false
-id=3C:FD:FE:9E:98:69/None, mac=3C:FD:FE:9E:98:69, location=of:0000cc37ab7cbde6/2, vlan=None, ip(s)=[10.6.0.5], configured=false
-```
-
-To modify the fabric configuration for your environment, on the head node,
-generate a new network configuration using the following commands:
-
-```
-cd /opt/cord_profile
-cp fabric-network-cfg.json{,.$(date +%Y%m%d-%H%M%S)}
-cord generate > fabric-network-cfg.json
-```
-
-Once these steps are done, delete the old configuration,
-load the new configuration into XOS, and restart the apps in ONOS:
-```
-sudo pip install httpie
-http -a onos:rocks DELETE http://onos-fabric:8181/onos/v1/network/configuration/
-docker-compose -p rcord exec xos_ui python /opt/xos/tosca/run.py xosadmin@opencord.org /opt/cord_profile/fabric-service.yaml
-http -a onos:rocks POST http://onos-fabric:8181/onos/v1/applications/org.onosproject.vrouter/active
-http -a onos:rocks POST http://onos-fabric:8181/onos/v1/applications/org.onosproject.segmentrouting/active
-```
-
-To verify that XOS has pushed the configuration to ONOS, log into ONOS in the onos-fabric VM and run `netcfg`:
-```
-$ ssh -p 8101 onos@onos-fabric netcfg
-Password authentication
-Password: # password is 'rocks'
-{
- "hosts" : {
- "00:00:00:00:00:04/None" : {
- "basic" : {
- "ips" : [ "10.6.2.2" ],
- "location" : "of:0000000000000002/4"
- }
- },
- "00:00:00:00:00:03/None" : {
- "basic" : {
- "ips" : [ "10.6.2.1" ],
- "location" : "of:0000000000000002/3"
- }
- },
-... etc.
-```
-
-### Update physical host locations in XOS
-
-To correctly configure the fabric when VMs and containers are created on a
-physical host, XOS needs to associate the `location` tag of each physical host
-(from the fabric configuration) with its Node object in XOS. This step needs to
-be done after a new physical compute node is provisioned on the POD.
-
-To update the node location in XOS, perform the following steps:
-
- * Login to the XOS admin GUI at `http://<head-node-ip>/admin/`
- * Navigate to `/admin/core/node/`
- * Select the node to change
- * Select the *Tags* tab for the node
- * Click on *Add another tag*
- * Fill in the following information:
- * Service: *ONOS_Fabric*
- * Name: *location*
- * Value: *(location of the node in the fabric configuration created above)*
- * Click _Save_ button at bottom.
-
-### Connect Switches to the controller
-We need to manually connect the switches to ONOS after the network config is applied.
-This can be done by running the following ansible script on the head node.
-```
-ansible-playbook /etc/maas/ansible/connect-switch.yml
-```
-This ansible script will automatically locate all switches in the DHCP harvest and connect them to the controller.
-
-### Complete
-
-This step is complete when each compute node can ping the fabric IP address of all the other nodes.
-
-
-## Getting Help
-
-If it seems that something has gone wrong with your setup, there are a number of ways that you
-can get help -- in the documentation on the [OpenCORD wiki](https://wiki.opencord.org), on the
-[OpenCORD Slack channel](https://opencord.slack.com) (get an invitation [here](https://slackin.opencord.org)),
-or on the [CORD-discuss mailing list](https://groups.google.com/a/opencord.org/forum/#!forum/cord-discuss).
-
-See the [How to Contribute to CORD wiki page](https://wiki.opencord.org/display/CORD/How+to+Contribute+to+CORD#HowtoContributetoCORD-AskingQuestions)
-for more information.
-
-## Re-provision Compute Nodes and Switches
-If you would like to re-provision a switch or a compute node the `cord prov delete`
-command can be used. This command takes one or more IDs as parameters and will
-delete the provisioning records for these devices. This will cause the provisioner
-to re-provision them.
-
-You can also use the argument `--all`, which will delete all known provisioning
-records.
-
-```
-cord prov delete node-c22534a2-bd0f-11e6-a36d-2c600ce3c239
-node-c22534a2-bd0f-11e6-a36d-2c600ce3c239 DELETED
-```
-
-```
-cord prov delete --all
-cc:37:ab:7c:b7:4c DELETED
-cc:37:ab:7c:ba:58 DELETED
-cc:37:ab:7c:bd:e6 DELETED
-cc:37:ab:7c:bf:6c DELETED
-node-c22534a2-bd0f-11e6-a36d-2c600ce3c239 DELETED
-node-c238ea9c-bd0f-11e6-8206-2c600ce3c239 DELETED
-node-c25713c8-bd0f-11e6-96dd-2c600ce3c239 DELETED
-```
diff --git a/docs/quickstart_vm.md b/docs/quickstart_vm.md
deleted file mode 100644
index e60ad7c..0000000
--- a/docs/quickstart_vm.md
+++ /dev/null
@@ -1,482 +0,0 @@
-# CORD Quick Start Guide using Virtual Machine Nodes
-
-[*This tutorial is obsolete. Instead see information about CORD-in-a-Box,
-as outlined in the [Quick Start Tutorial](quickstart.md).*]
-
-This guide is meant to enable the user to quickly exercise the capabilities
-provided by the artifacts of this repository. There are three high level tasks
-that can be exercised:
- - Create development environment
- - Build and Publish Docker images that support bare metal provisioning
- - Deploy the bare metal provisioning capabilities to a virtual machine (head
- node) and PXE boot a compute node
-
-**Prerequisite: Vagrant is installed and operational.**
-**Note:** *This quick start guide has only been tested against Vagrant and
-VirtualBox, specifically on MacOS.*
-
-## Create Development Environment
-The development environment is required for the other tasks in this repository.
-The other tasks could technically be done outside this Vagrant based development
-environment, but it would be left to the user to ensure connectivity and
-required tools are installed. It is far easier to leverage the Vagrant based
-environment.
-
-## Install Repo
-
-Make sure you have a bin directory in your home directory and that it is included in your path:
-
-```
-mkdir ~/bin
-PATH=~/bin:$PATH
-```
-
-(of course you can put repo wherever you want)
-
-Download the Repo tool and ensure that it is executable:
-
-```
-curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
-chmod a+x ~/bin/repo
-```
-
-## Clone the Repository
-To clone the repository, on your OtP build host issue the following commands:
-```
-mkdir opencord && cd opencord
-repo init -u https://gerrit.opencord.org/manifest -b master -g build,onos
-```
-
-Fetch the opencord source code
-```
-repo sync
-```
-
-### Complete
-When this is complete, a listing (`ls`) of this directory should yield output
-similar to:
-```
-ls
-build onos-apps
-```
-
-### Create Development Machine and Head Node Production Server
-To create the development machine the following single Vagrant command can be
-used. This will create an Ubuntu 14.04 LTS based virtual machine and install
-some basic required packages, such as Docker, Docker Compose, and
-Oracle Java 8.
-```
-cd build
-vagrant up corddev
-```
-**NOTE:** *It may take several minutes for the first command `vagrant up
-corddev` to complete as it will include creating the VM as well as downloading
-and installing various software packages.*
-
-To create the VM that represents the POD head node the following vagrant
-command can be used. This will create a basic Ubuntu 14.04 LTS server with
-no additional software installed.
-```
-vagrant up prod
-```
-
-### Connect to the Development Machine
-To connect to the development machine the following vagrant command can be used.
-```
-vagrant ssh corddev
-```
-
-Once connected to the Vagrant machine, you can find the deployment artifacts
-in the `/cord` directory on the VM.
-```
-cd /cord
-```
-
-### Gradle
-[Gradle](https://gradle.org/) is the build tool that is used to help
-orchestrate the build and deployment of a POD. A *launch* script is included
-in the vagrant machine that will automatically download and install `gradle`.
-The script is called `gradlew` and the download / install will be invoked on
-the first use of this script; thus the first use may take a little longer
-than subsequent invocations and requires a connection to the internet.
-
-### Complete
-Once you have created and connected to the development environment this task is
-complete. The `cord` repository files can be found on the development machine
-under `/cord`. This directory is mounted from the host machine so changes
-made to files in this directory will be reflected on the host machine and
-vice-versa.
-
-## Fetch
-The fetching phase of the deployment pulls Docker images from the public
-repository down to the local machine as well as clones any `git` submodules
-that are part of the project. This phase can be initiated with the following
-command:
-```
-./gradlew fetch
-```
-
-### Complete
-Once the fetch command has successfully been run, this step is complete. After
-this command completes you should be able to see the Docker images that were
-downloaded using the `docker images` command on the development machine:
-```
-docker images
-REPOSITORY TAG IMAGE ID CREATED SIZE
-python 2.7-alpine 836fa7aed31d 5 days ago 56.45 MB
-consul <none> 62f109a3299c 2 weeks ago 41.05 MB
-registry 2.4.0 8b162eee2794 9 weeks ago 171.1 MB
-abh1nav/dockerui latest 6e4d05915b2a 19 months ago 469.5 MB
-```
-
-## Build Images
-Bare metal provisioning leverages utilities built and packaged as Docker
-container images. These utilities are:
-
- - cord-maas-bootstrap - (directory: `bootstrap`) run at MAAS installation
- time to customize the MAAS instance via REST interfaces
- - cord-maas-automation - (directory: `automation`) daemon on the head node to
- automate PXE booted servers through the MAAS bare metal deployment work flow
- - cord-maas-switchq - (directory: `switchq`) daemon on the head
- node that watches for new switches being added to the POD and triggers
- provisioning when a switch is identified (via the OUI on MAC address).
- - cord-maas-provisioner - (directory: `provisioner`) daemon on the head node
-   to manage the execution of ansible playbooks against switches and compute
- nodes as they are added to the POD.
- - cord-ip-allocator - (directory: `ip-allocator`) daemon on the head node used
-   to allocate IP addresses for the fabric interfaces.
- - cord-dhcp-harvester - (directory: `harvester`) run on the head node to
- facilitate CORD / DHCP / DNS integration so that all hosts can be resolved
- via DNS
- - opencord/mavenrepo
- - cord-test/nose
- - cord-test/quagga
- - cord-test/radius
- - onosproject/onos
-
-The images can be built by using the following command. This will build all
-the images.
-```
-./gradlew buildImages
-```
-
-**NOTE:** *The first time you run `./gradlew` it will download the `gradle`
-binary from the Internet and install it locally. This is a one-time operation.*
-
-### Complete
-Once the `buildImages` command successfully runs, this task is complete. The
-CORD artifacts have been built and the Docker images can be viewed by using the
-`docker images` command on the development machine.
-```
-docker images --format 'table {{.Repository}}\t{{.Tag}}\t{{.Size}}\t{{.ID}}'
-REPOSITORY TAG SIZE IMAGE ID
-cord-maas-switchq latest 781 MB 4736cc8c4f71
-cord-provisioner latest 814.6 MB 50ab479e4b52
-cord-dhcp-harvester latest 60.67 MB 88f900d74f19
-cord-maas-bootstrap latest 367.5 MB 19bde768c786
-cord-maas-automation latest 366.8 MB 1e2ab7242060
-cord-ip-allocator latest 324.3 MB f8f2849107f6
-opencord/mavenrepo latest 434.2 MB 9d1ad7214262
-cord-test/nose latest 1.028 GB 67b996f2ad19
-cord-test/quagga latest 454.4 MB b46f7dd20bdf
-cord-test/radius latest 312.1 MB e09d78aef295
-onosproject/onos <none> 825.6 MB 309088c647cf
-python 2.7-alpine 56.45 MB 836fa7aed31d
-golang 1.6-alpine 282.9 MB d688f409d292
-golang alpine 282.9 MB d688f409d292
-ubuntu 14.04 196.6 MB 38c759202e30
-consul <none> 41.05 MB 62f109a3299c
-nginx latest 182.7 MB 0d409d33b27e
-registry 2.4.0 171.1 MB 8b162eee2794
-swarm <none> 19.32 MB 47dc182ea74b
-nginx <none> 182.7 MB 3c69047c6034
-hbouvier/docker-radius latest 280.9 MB 5d5d3c0a91b0
-abh1nav/dockerui latest 469.5 MB 6e4d05915b2a
-```
-**NOTE:** *Not all of the above Docker images were built by the `buildImages`
-command. Some of them, like `golang`, are used as a base for other Docker
-images; and some, like `abh1nav/dockerui`, were downloaded when the development
-machine was created with `vagrant up`.*
-
-## Deployment Configuration File
-The commands to deploy the POD can be customized via a *deployment
-configuration file*. The file is in [YAML](http://yaml.org/) format. For
-the purposes of this quick start, which uses vagrant VMs for the POD nodes, a
-deployment configuration file has been provided in
-[`config/default.yml`](config/default.yml). This default configuration
-specifies the target server as the vagrant machine named `prod` that was
-created earlier.
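-
-As a point of reference, the relevant portion of such a file looks roughly like
-the sketch below. The sketch is illustrative only; consult
-[`config/default.yml`](config/default.yml) for the authoritative contents.
-```
-seedServer:
-  # address of the target (head node) server; in this quick start it is the
-  # vagrant machine named 'prod'
-  ip: '10.100.198.201'
-  # ... additional seed server, network, and power management settings ...
-```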
-
-## Prime the Target server
-The target server is the server that will assume the role of the head node in
-the CORD POD. Priming this server consists of deploying the base software that
-is required for the rest of the deployment, such as a Docker registry. Having
-the Docker registry on the target server allows the deployment process to push
-images to the target server that are used in the rest of the process, thus
-making the head node a self-contained deployment.
-```
-./gradlew prime
-```
-
-### Complete
-Once the `prime` command successfully runs, this task is complete. When this
-step is complete, a Docker registry and a Docker registry mirror are running on
-the target server. This can be verified by using the `docker ps` command on the
-production head node VM.
-```
-docker ps --format 'table {{.ID}}\t{{.Image}}\t{{.Command}}\t{{.CreatedAt}}'
-CONTAINER ID IMAGE COMMAND CREATED AT
-5f1cbebe7e61 registry:2.4.0 "/bin/registry serve " 2016-07-13 17:03:08 +0000 UTC
-6d3a911e5323 registry:2.4.0 "/bin/registry serve " 2016-07-13 17:03:08 +0000 UTC
-```
-
-## Publish
-Publishing consists of *pushing* the built Docker images to the Docker
-repository on the target head node. This step can take a while as it has to
-transfer all of the images from the development machine to the target head node.
-This step is started with the following command:
-```
-./gradlew -PtargetReg=10.100.198.201:5000 publish
-```
-
-### Complete
-Once the `publish` command successfully runs, this task is complete. It can be
-verified by performing a query on the target server's Docker registry using
-the following command on the development machine:
-```
-curl -sS http://10.100.198.201:5000/v2/_catalog | jq .
-{
- "repositories": [
- "consul",
- "cord-dhcp-harvester",
- "cord-ip-allocator",
- "cord-maas-automation",
- "cord-maas-bootstrap",
- "cord-maas-switchq",
- "cord-provisioner",
- "mavenrepo",
- "nginx",
- "onosproject/onos",
- "swarm"
- ]
-}
-```
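-
-The standard Docker registry v2 API can also be used to check which tags were
-pushed for an individual image, for example (using `cord-provisioner` as the
-image name):
-```
-curl -sS http://10.100.198.201:5000/v2/cord-provisioner/tags/list | jq .
-```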
-
-## Deploy Bare Metal Provisioning Capabilities
-There are three parts to deploying bare metal: deploying the head node PXE
-server (`MAAS`), PXE booting a compute node, and post deployment provisioning
-of the compute node. These tasks are accomplished using additional
-Vagrant machines as well as by executing `gradle` tasks in the Vagrant
-development machine.
-
-### VirtualBox Power Management
-The default MAAS deployment does not support power management for
-VirtualBox-based hosts. As part of the MAAS installation, support was added for
-power management, but it does require some additional configuration. This
-additional configuration is detailed at the end of this document, but is
-mentioned here because when deploying the head node an additional parameter
-must be set. This parameter specifies the username on the host machine that
-should be used when SSHing from the head node to the host machine to remotely
-execute the `vboxmanage` command. This is typically the username used when
-logging into your laptop or desktop development machine. This value is set by
-editing the `config/default.yml` file and replacing the default value of
-`seedServer.power_helper_user` with the appropriate username. The default value
-is `cord`.
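-
-For example, if the username on your VirtualBox host is `myuser` (an
-illustrative value), the relevant portion of `config/default.yml` would read:
-```
-seedServer:
-  power_helper_user: 'myuser'
-```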
-
-### Deploy MAAS
-Canonical MAAS provides the PXE and other bare metal provisioning services for
-CORD and will be deployed on the head node.
-```
-./gradlew deployBase
-```
-
-This task can take some time, so be patient. It should complete without errors,
-so if an error is encountered, something went horribly wrong.
-
-### Complete
-This step is complete when the command successfully runs. The Web UI for MAAS
-can be viewed by browsing to the vagrant machine named `prod`. Because this
-machine is on a host-internal network, it can't be directly reached from the
-host machine, typically your laptop. In order to expose the UI, issue the
-following command from the VM host machine:
-```
-vagrant ssh prod -- -L 8080:localhost:80
-```
-This command will create an SSH tunnel from the VM host machine to the head
-node so that from the VM host you can view the MAAS UI by visiting the URL
-`http://localhost:8080/MAAS`. The default authentication credentials are a
-username of `cord` and a password of `cord`.
-
-After the `deployBase` command installs `MAAS`, it initiates the download of
-an Ubuntu 14.04 boot image that will be used to boot the other POD servers.
-This download can take some time and the process cannot continue until the
-download is complete. The status of the download can be verified through
-the UI by visiting the URL `http://localhost:8888/MAAS/images/`, or from the
-command line on the head node via the following commands:
-```
-APIKEY=$(sudo maas-region-admin apikey --user=cord)
-maas login cord http://localhost/MAAS/api/1.0 "$APIKEY"
-maas cord boot-resources read | jq 'map(select(.type != "Synced"))'
-```
-
-If the output of the above commands is not an empty list, `[]`, then the
-images have not yet been completely downloaded. Depending on your network
-speed, this could take several minutes. Please wait and then attempt the last
-command again until the returned list is empty, `[]`. When the list is empty
-you can proceed.
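-
-If you prefer not to re-run the command by hand, a small shell loop (a
-convenience sketch built from the command above) will poll every 30 seconds
-until the list is empty:
-```
-until [ "$(maas cord boot-resources read | jq 'map(select(.type != "Synced")) | length')" -eq 0 ]; do
-  sleep 30
-done
-```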
-
-Browse around the UI and get familiar with MAAS via the documentation at `http://maas.io`.
-
-## Deploy XOS
-XOS provides service provisioning and orchestration for the CORD POD. To deploy
-XOS to the head node use the following command:
-```
-./gradlew deployPlatform
-```
-
-This task can take some time, so be patient. It should complete without errors,
-so if an error is encountered, something went horribly wrong.
-
-### Complete
-This step is complete when the command successfully runs. The deployment of XOS
-includes a deployment of OpenStack.
-
-## Create and Boot Compute Node
-The sample vagrant VM-based POD is configured to support the creation of three
-compute nodes. These nodes will PXE boot from the head node and are created
-using the `vagrant up` command as follows:
-```
-vagrant up compute_node1
-vagrant up compute_node2
-vagrant up compute_node3
-```
-**NOTE:** *This task is executed on your host machine and not in the
-development virtual machine.*
-
-When starting the compute node VMs the console (UI) for each will be displayed
-so you are able to watch the boot process if you like.
-
-As vagrant starts these machines, you will see the following error:
-```
-==> compute_node1: Waiting for machine to boot. This may take a few minutes...
-The requested communicator 'none' could not be found.
-Please verify the name is correct and try again.
-```
-
-This error is normal; it occurs because vagrant attempts to `SSH` to a server
-after it is started. However, because these servers are PXE booting, this is
-not possible. To work around this, the vagrant `communicator` setting for each
-of the compute nodes is set to "none"; vagrant therefore complains, but the
-machines will still PXE boot.
-
-The compute node VM will boot, register with MAAS, and then be shut off. After
-this is complete an entry for the node will be in the MAAS UI at
-`http://localhost:8888/MAAS/#/nodes`. It will be given a random hostname made
-up, in the Canonical way, of an adjective and a noun, such as
-`popular-feast.cord.lab`. *The name will be different for every deployment.*
-The new node will be in the `New` state.
-
-If you have properly configured power management for VirtualBox (see below), the
-host will be automatically transitioned from `New` through the states of
-`Commissioning` and `Acquired` to `Deployed`.
-
-### Post Deployment Provisioning of the Compute Node
-Once the node is in the `Deployed` state, it will be provisioned for use in a
-CORD POD by the execution of an `Ansible` playbook.
-
-### Complete
-Once the compute node is in the `Deployed` state and its post-deployment
-provisioning is complete, this task is complete.
-
-Logs of the post-deployment provisioning of the compute nodes can be found
-in `/etc/maas/ansible/logs` on the head node.
-
-Additionally, the post-deployment provisioning of the compute nodes can be
-queried from the provisioning service using `curl`:
-```
-curl -sS http://$(docker inspect --format '{{.NetworkSettings.Networks.maas_default.IPAddress}}' provisioner):4243/provision/ | jq '[.[] | { "status": .status, "name": .request.Info.name}]'
-[
- {
- "message": "",
- "name": "steel-ghost.cord.lab",
- "status": 2
- },
- {
- "message": "",
- "name": "feline-shirt.cord.lab",
- "status": 2
- },
- {
- "message": "",
- "name": "yellow-plot.cord.lab",
- "status": 2
- }
-]
-```
-In the above output, a `status` of 2 means that the provisioning is complete.
-The other values that `status` might hold are:
- - `0` - Pending, the request has been accepted by the provisioner but not yet
- started
- - `1` - Running, the request is being processed and the node is being
- provisioned
- - `2` - Complete, the provisioning has been completed successfully
- - `3` - Failed, the provisioning has failed and the `message` will be
- populated with the exit message from provisioning.
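-
-Building on the query above, the `jq` filter can also be used to pull out just
-the requests that have failed, for example:
-```
-curl -sS http://$(docker inspect --format '{{.NetworkSettings.Networks.maas_default.IPAddress}}' provisioner):4243/provision/ \
-  | jq '[.[] | select(.status == 3) | {name: .request.Info.name, message: .message}]'
-```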
-
-## Create and Boot False Switch
-The VM based deployment includes the definition of a vagrant machine that will
-exercise some of the automation used to boot OpenFlow switches in the CORD POD.
-It accomplishes this by forcing the MAC address on the vagrant VM to be a MAC
-that is recognized as a supported OpenFlow switch.
-
-The POD automation will thus perform post-deployment provisioning on this VM to
-download and install software to the device. This vagrant VM can be created
-with the following command:
-```
-vagrant up switch
-```
-
-### Complete
-This step is complete when the command completes successfully. You can verify
-the provisioning of the false switch by querying the provisioning service
-using curl.
-```
-curl -sS http://$(docker inspect --format '{{.NetworkSettings.Networks.maas_default.IPAddress}}' provisioner):4243/provision/cc:37:ab:00:00:01 | jq '[{ "status": .status, "name": .request.Info.name, "message": .message}]'
-[
- {
- "message": "",
- "name": "fakeswitch",
- "status": 2
- }
-]
-```
-As above, a `status` of 2 means that the provisioning is complete; the other
-possible values of `status` are the same as those listed for compute node
-provisioning.
-
-## VirtualBox Power Management
-VirtualBox power management is implemented via helper scripts that SSH to the
-VirtualBox host and execute `vboxmanage` commands. For this to work, the scripts
-must be configured with a username and host to use when SSHing, and that
-account must allow SSH from the head node guest to the host using SSH keys such
-that no password entry is required.
-
-To enable SSH key-based login, assuming that VirtualBox is running on a
-Linux-based system, you can copy the MAAS SSH public key from
-`/var/lib/maas/.ssh/id_rsa.pub` on the head node to your account's
-`authorized_keys` file (a sketch of one way to do this appears at the end of
-this section). You can verify that this is working by issuing the following
-commands from your host machine:
-```
-vagrant ssh prod
-sudo su - maas
-ssh yourusername@host_ip_address
-```
-
-If you are able to complete these commands without being prompted for a password, VirtualBox power management should operate correctly.
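-
-If the `ssh` step still prompts for a password, the key has not been installed
-yet. One way to install it (a sketch, assuming a Linux VirtualBox host and that
-`ssh-copy-id` is available) is to run the following as the `maas` user on the
-head node:
-```
-ssh-copy-id -i /var/lib/maas/.ssh/id_rsa.pub yourusername@host_ip_address
-```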
diff --git a/docs/quickstarts.md b/docs/quickstarts.md
new file mode 100644
index 0000000..b0cfc47
--- /dev/null
+++ b/docs/quickstarts.md
@@ -0,0 +1,47 @@
+# Quickstarts
+
+This section provides a short list of the essential commands that can be used to deploy CiaB and a physical POD.
+
+>NOTE: Looking for the full CORD-in-a-Box (CiaB) installation guide? You can find it [here](install_ciab.md).
+
+>NOTE: Looking for the full physical POD installation guide? You can find it [here](install_pod.md).
+
+## Common step (both for CiaB and physical POD)
+<pre><code>cd ~ && \
+wget https://raw.githubusercontent.com/opencord/cord/{{ book.branch }}/scripts/cord-bootstrap.sh && \
+chmod +x cord-bootstrap.sh && \
+~/cord-bootstrap.sh -v</code></pre>
+
+Log out and log back in.
+
+## CORD-in-a-Box (CiaB)
+To install CiaB, type the following commands:
+
+```
+cd ~/cord/build && \
+make PODCONFIG=rcord-virtual.yml config && \
+make -j4 build |& tee ~/build.out
+```
+
+## Physical POD
+Following are the steps needed to install a physical POD.
+
+### Prepare the head node:
+
+```
+sudo adduser cord && \
+sudo adduser cord sudo && \
+echo 'cord ALL=(ALL) NOPASSWD:ALL' | sudo tee --append /etc/sudoers.d/90-cloud-init-users
+```
+
+### On the development machine:
+Create your POD configuration `.yml` file in `~/cord/build/podconfig`.
+
+```
+cd ~/cord/build && \
+make PODCONFIG={YOUR_PODCONFIG_FILE.yml} config && \
+make -j4 build |& tee ~/build.out
+```
+
+### Compute nodes and fabric switches
+After a successful build, set the compute nodes and the fabric switches to boot from PXE and manually reboot them. They will be automatically deployed.
\ No newline at end of file
diff --git a/docs/terminology.md b/docs/terminology.md
new file mode 100644
index 0000000..7ce4729
--- /dev/null
+++ b/docs/terminology.md
@@ -0,0 +1,38 @@
+# Terminology
+
+This guide uses the following terminology.
+
+* **POD**: A single physical deployment of CORD.
+
+* **Full POD**: A typical configuration, used as the example in this guide.
+A full CORD POD is composed of three servers and four fabric switches.
+It makes it possible to experiment with all the core features of CORD and it
+is what the community uses for tests.
+
+* **Half POD**: A minimum-sized configuration. It is similar to a full POD, but with less hardware. It consists of two servers (one head node and one compute node), and one fabric switch. It does not allow experimentation with all of the core features that
+CORD offers (e.g., a switching fabric), but it is still good for basic experimentation and testing.
+
+* **Development (Dev) machine**: This is the machine used
+to download, build and deploy CORD onto a POD.
+Sometimes it is a dedicated server, and sometimes the developer's laptop.
+In principle, it can be any machine that satisfies the hardware and software
+requirements reported below.
+
+* **Development (Dev) VM**: Bootstrapping the CORD installation requires a lot of
+software to be installed and some non-trivial configurations to be applied.
+All of this happens on the dev machine.
+To help users with the process, CORD provides an easy way to create a
+VM on the dev machine with all the required software and configurations in place.
+
+* **Compute Node(s)**: A server in a POD that runs VMs or containers associated with
+one or more tenant services. This terminology is borrowed from OpenStack.
+
+* **Head Node**: A compute node of the POD that also runs management services. These include, for example, XOS (the orchestrator), two instances of ONOS
+(the SDN controller: one to control the underlay fabric and one to control the overlay), MAAS, and all the services needed to automatically install and configure the rest of
+the POD devices.
+
+* **Fabric Switch**: A switch in a POD that interconnects the other switches and
+servers inside the POD.
+
+* **vSG**: The virtual Subscriber Gateway (vSG) is the CORD counterpart for existing
+CPEs. It implements a bundle of subscriber-selected functions, such as Restricted Access, Parental Control, Bandwidth Metering, Access Diagnostics and Firewall. These functionalities run on commodity hardware located in the Central Office rather than on the customer’s premises. There is still a device in the home (which we still refer to as the CPE), but it has been reduced to a bare-metal switch.
\ No newline at end of file