CORD-3064 Document M-CORD / OpenStack deployment procedure

Change-Id: Ib012cf11223c5ad13a11bb1af4ef53a115a9cd60
diff --git a/SUMMARY.md b/SUMMARY.md
index 0e6f012..affc07c 100644
--- a/SUMMARY.md
+++ b/SUMMARY.md
@@ -9,20 +9,23 @@
             * [Single Node KB8s](prereqs/k8s-single-node.md)
             * [Multi Node KB8s](prereqs/k8s-multi-node.md)
         * [Helm](prereqs/helm.md)
-        * [Docker Registry](prereqs/docker-registry.md)
+        * [Docker Registry (optional)](prereqs/docker-registry.md)
+        * [OpenStack Support (M-CORD)](prereqs/openstack-helm.md)
     * [Fabric setup](prereqs/fabric-setup.md)
     * [Install CORD](profiles/intro.md)
         * [RCORD Lite](profiles/rcord/install.md)
             * [OLT Setup](openolt/README.md)
         * [MCORD](profiles/mcord/install.md)
             * [EnodeB Setup](profiles/mcord/enodeb-setup.md)
-    * [OpenStack Integration](prereqs/openstack.md)
     * [Helm Reference](charts/helm.md)
         * [XOS-CORE](charts/xos-core.md)
         * [ONOS](charts/onos.md)
         * [VOLTHA](charts/voltha.md)
         * [Kafka](charts/kafka.md)
         * [Hippie OSS](charts/hippie-oss.md)
+        * [Base OpenStack](charts/base-openstack.md)
+            * [VTN Prerequisites](prereqs/vtn-setup.md)
+        * [M-CORD](charts/mcord.md)
 * [Operating CORD](operating_cord/operating_cord.md)
     * General info
         * [Diagnostics](operating_cord/diag.md)
diff --git a/charts/base-openstack.md b/charts/base-openstack.md
new file mode 100644
index 0000000..aa404bd
--- /dev/null
+++ b/charts/base-openstack.md
@@ -0,0 +1,64 @@
+# Deploying the Base OpenStack Chart
+
+XOS can be configured to manage an existing OpenStack installation
+(e.g., deployed using [openstack-helm](../prereqs/openstack-helm.md)) by
+installing the `xos-profiles/base-openstack` Helm chart in the
+`helm-charts` repository.  This chart requires that the
+[xos-core](xos-core.md) chart has already been installed.
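+
+If the core chart is not already present, a minimal sketch of installing it
+first (assuming you are at the root of the `helm-charts` repository; see
+[xos-core](xos-core.md) for details):
+
+```bash
+helm dep update xos-core
+helm install -n xos-core xos-core
+```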
+
+## System Prerequisites for VTN
+
+This chart causes XOS to load the VTN app into ONOS and configure it.
+Prior to installing the chart, make sure that VTN's requirements are
+satisfied by following [this guide](../prereqs/vtn-setup.md).
+
+## Single-node configuration
+
+Here is an example of deploying the `xos-profiles/base-openstack` chart
+on a single-node OpenStack server set up by the
+`automation-tools/openstack-helm/openstack-helm-dev-setup.sh` script:
+
+```bash
+helm dep update xos-profiles/base-openstack
+helm install -n base-openstack xos-profiles/base-openstack \
+    --set computeNodes.master.name=`hostname` \
+    --set vtn-service.sshUser=`whoami`
+```
+
+## Multi-node configuration
+
+If you are deploying on a multi-node OpenStack cluster, create a YAML
+file containing information for each node, and pass it as an argument
+when installing the `xos-profiles/base-openstack` chart using the `-f`
+option.  An example `compute-nodes.yaml` file:
+
+```yaml
+computeNodes:
+  master:
+    name: node0.opencord.org
+    bridgeId: of:00000000abcdef01
+    dataPlaneIntf: fabric
+    dataPlaneIp: 10.6.1.1/24
+  node1:
+    name: node1.opencord.org
+    bridgeId: of:00000000abcdef02
+    dataPlaneIntf: fabric
+    dataPlaneIp: 10.6.1.2/24
+  node2:
+    name: node2.opencord.org
+    bridgeId: of:00000000abcdef03
+    dataPlaneIntf: fabric
+    dataPlaneIp: 10.6.1.3/24
+```
+
+The entry for the cluster's master node should be keyed `master`; the
+keys for the other nodes can be anything.  For each node:
+
+* `name` is the OpenStack hypervisor name of the node (often the FQDN).
+* `bridgeId` is `of:` followed by a unique 16-digit hex string.
+* `dataPlaneIntf` is the name of the fabric interface on the node.  This could be a bridge or a bond interface.
+* `dataPlaneIp` is the node's IP address and subnet mask on the fabric subnet.
+
+When installing the `xos-profiles/base-openstack` chart, it is also
+necessary to set the value of `vtn-service.sshUser` to the user account
+for which the public key was added to `authorized_keys` earlier.
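+
+For example, a hypothetical multi-node install combining the
+`compute-nodes.yaml` file above with SSH user `ubuntu`:
+
+```bash
+helm dep update xos-profiles/base-openstack
+helm install -n base-openstack xos-profiles/base-openstack \
+    -f compute-nodes.yaml \
+    --set vtn-service.sshUser=ubuntu
+```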
diff --git a/charts/mcord.md b/charts/mcord.md
new file mode 100644
index 0000000..bcab325
--- /dev/null
+++ b/charts/mcord.md
@@ -0,0 +1,12 @@
+# Deploying the M-CORD profile chart
+
+To deploy the M-CORD profile chart:
+
+```shell
+helm dep update xos-profiles/mcord
+helm install -n mcord xos-profiles/mcord --set proxySshUser=ubuntu
+```
+
+The value of `proxySshUser` should be set to the user account corresponding
+to the public key added to the node when
+[prepping the nodes for VTN](../prereqs/vtn-setup.md).
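+
+To confirm the release deployed, a quick check with standard Helm tooling
+(not specific to this chart):
+
+```shell
+helm status mcord
+```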
diff --git a/charts/onos.md b/charts/onos.md
index 4b13e72..81f575c 100644
--- a/charts/onos.md
+++ b/charts/onos.md
@@ -36,7 +36,7 @@
 ## onos-vtn
 
 ```shell
-helm install -n onos-cord -f configs/onos-cord.yaml onos
+helm install -n onos-cord onos
 ```
 
 The configuration doesn't expose any nodeport.
diff --git a/prereqs/openstack-helm.md b/prereqs/openstack-helm.md
index f87b125..19ae7c6 100644
--- a/prereqs/openstack-helm.md
+++ b/prereqs/openstack-helm.md
@@ -1 +1,138 @@
-# OpenStack helm
+# OpenStack Support (M-CORD)
+
+The [openstack-helm](https://github.com/openstack/openstack-helm)
+project can be used to install a set of Kubernetes nodes as OpenStack
+compute nodes, with the OpenStack control services (nova, neutron,
+keystone, glance, etc.) running as containers on Kubernetes.
+Instructions for installing `openstack-helm` on a single node or a multi-node
+cluster can be found at [https://docs.openstack.org/openstack-helm/latest/index.html](https://docs.openstack.org/openstack-helm/latest/index.html).
+
+This page describes steps for installing `openstack-helm`, including how to
+customize the documented install procedure with specializations for CORD.
+CORD uses the VTN ONOS app to control Open vSwitch on the compute nodes
+and configure virtual networks between VMs on the OpenStack cluster.
+Neutron must be configured to pass control to ONOS rather than using
+`openvswitch-agent` to manage OvS.
+
+After the install process is complete, you won't yet have a fully
+working OpenStack system; to finish the setup you will also need to
+install the [base-openstack](../charts/base-openstack.md) chart.
+
+## Single node quick start
+
+For convenience, a script to install Kubernetes, Helm, and `openstack-helm`
+on a _single Ubuntu 16.04 node_ is provided in the `automation-tools`
+repository.  This script also customizes the install as described
+below.
+
+```bash
+git clone https://gerrit.opencord.org/automation-tools
+automation-tools/openstack-helm/openstack-helm-dev-setup.sh
+```
+
+If you run this script, you can skip the instructions on the rest of
+this page.
+
+## Customizing the openstack-helm install for CORD
+
+In order to enable the VTN app to control Open vSwitch on the compute
+nodes, it is necessary to customize the `openstack-helm` installation.
+The customization is done by specifying `values.yaml` files to use
+when installing the Helm charts.
+
+The `openstack-helm` installation process designates one node as the
+master node; the Helm commands are run on this node.  The following
+values files should be created on the master node prior to installing
+the `openstack-helm` charts.
+
+```bash
+cat <<EOF > /tmp/glance-cord.yaml
+---
+network:
+  api:
+    ingress:
+      annotations:
+        nginx.ingress.kubernetes.io/proxy-body-size: "0"
+EOF
+export OSH_EXTRA_HELM_ARGS_GLANCE="-f /tmp/glance-cord.yaml"
+```
+
+```bash
+cat <<EOF > /tmp/nova-cord.yaml
+---
+labels:
+  api_metadata:
+    node_selector_key: openstack-helm-node-class
+    node_selector_value: primary
+network:
+  backend: []
+pod:
+  replicas:
+    api_metadata: 1
+    placement: 1
+    osapi: 1
+    conductor: 1
+    consoleauth: 1
+    scheduler: 1
+    novncproxy: 1
+EOF
+export OSH_EXTRA_HELM_ARGS_NOVA="-f /tmp/nova-cord.yaml"
+```
+
+```bash
+cat <<EOF > /tmp/neutron-cord.yaml
+---
+images:
+  tags:
+    neutron_server: xosproject/neutron-onos:newton
+manifests:
+  daemonset_dhcp_agent: false
+  daemonset_l3_agent: false
+  daemonset_lb_agent: false
+  daemonset_metadata_agent: false
+  daemonset_ovs_agent: false
+  daemonset_sriov_agent: false
+network:
+  backend: []
+  interface:
+    tunnel: "eth0"
+pod:
+  replicas:
+    server: 1
+conf:
+  plugins:
+    ml2_conf:
+      ml2:
+        type_drivers: vxlan
+        tenant_network_types: vxlan
+        mechanism_drivers: onos_ml2
+      ml2_type_vxlan:
+        vni_ranges: 1001:2000
+      onos:
+        url_path: http://onos-cord-ui.default.svc.cluster.local:8181/onos/cordvtn
+        username: onos
+        password: rocks
+EOF
+export OSH_EXTRA_HELM_ARGS_NEUTRON="-f /tmp/neutron-cord.yaml"
+```
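+
+These `OSH_EXTRA_HELM_ARGS_*` variables are consumed by the `openstack-helm`
+deployment scripts, which append them to the corresponding
+`helm upgrade --install` commands. As a rough sketch of what that amounts to
+for neutron (the exact script invocation varies by openstack-helm release):
+
+```bash
+# Hypothetical equivalent of what the neutron deployment script runs,
+# with the CORD override file appended:
+helm upgrade --install neutron ./neutron \
+    --namespace=openstack \
+    ${OSH_EXTRA_HELM_ARGS_NEUTRON}
+```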
+
+## Install process for openstack-helm
+
+Please see the `openstack-helm` documentation for instructions on how to
+install openstack-helm on a single node (for development and testing) or
+a multi-node cluster.
+
+* [system requirements](https://docs.openstack.org/openstack-helm/latest/install/developer/requirements-and-host-config.html)
+* [single-node installation](https://docs.openstack.org/openstack-helm/latest/install/developer/index.html)
+* [multi-node cluster](https://docs.openstack.org/openstack-helm/latest/install/multinode.html)
+
+The install process is flexible and fairly modular; see the links
+above for more information.  At a high level, it involves running
+scripts to:
+
+* Install software like Kubernetes and Helm
+* Build the Helm charts and install them in a local Helm repository
+* Install required packages
+* Configure DNS on the nodes
+* Generate `values.yaml` files based on the environment and install Helm charts using these files
+* Run post-install tests on the OpenStack services
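+
+Once the scripts complete, a quick sanity check (assuming the
+`openstack_helm` cloud entry created by the openstack-helm client setup):
+
+```bash
+# All pods in the openstack namespace should eventually be Running/Completed
+kubectl get pods --namespace openstack
+
+# The compute nodes should be registered as hypervisors
+export OS_CLOUD=openstack_helm
+openstack hypervisor list
+```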
diff --git a/prereqs/openstack.md b/prereqs/openstack.md
deleted file mode 100644
index 0e9b1ce..0000000
--- a/prereqs/openstack.md
+++ /dev/null
@@ -1,6 +0,0 @@
-# OpenStack Integration
-
-Lorem ipsum dolor sit amet, consectetur adipisicing elit. Quasi corporis
-officia dolorum fugit eligendi obcaecati earum, quam reprehenderit optio
-consectetur quaerat voluptates asperiores aut vel laudantium soluta laboriosam
-iure culpa.
diff --git a/prereqs/software.md b/prereqs/software.md
index f847e2a..ac8aed1 100644
--- a/prereqs/software.md
+++ b/prereqs/software.md
@@ -4,6 +4,13 @@
 
 As such, you can choose what operating system to use, how to configure it, and how to install Kubernetes on it.
 
-**M-CORD is the exception**, since part of its components still run on OpenStack. OpenStack is deployed as a set of Kubernetes containers. Anyway these containers require a special version of Kubernetes and additional configurations. You can find more informations about this in the M-CORD installation sections.
+**M-CORD is the exception**,
+since some of its components still run on OpenStack. OpenStack is
+deployed as a set of Kubernetes containers using the
+[openstack-helm](https://github.com/openstack/openstack-helm)
+project. Successfully installing the OpenStack Helm charts requires
+some additional system configuration besides just installing Kubernetes
+and Helm. You can find more information about this in the [OpenStack
+Support](./openstack-helm.md) installation section.
 
 Following sections describe what specifically CORD containers require and some pointers to DEMO automated-installation scripts.
diff --git a/prereqs/vtn-setup.md b/prereqs/vtn-setup.md
new file mode 100644
index 0000000..3608cbf
--- /dev/null
+++ b/prereqs/vtn-setup.md
@@ -0,0 +1,43 @@
+# VTN Prerequisites
+
+The ONOS VTN app provides virtual networking between VMs on an OpenStack cluster.  Prior to installing the [base-openstack](../charts/base-openstack.md) chart that installs and configures VTN, make sure that the following requirements are satisfied.
+
+First, VTN requires the ability to SSH to each compute node _using an account with passwordless `sudo` capability_.  Before installing the [base-openstack](../charts/base-openstack.md) chart, create an SSH keypair and copy the public key to the `authorized_keys` file on every node in the cluster:
+
+Generate a keypair:
+
+```bash
+ssh-keygen -t rsa
+```
+
+Copy the public key for user `ubuntu` to `node1.opencord.org` (example):
+
+```shell
+ssh-copy-id ubuntu@node1.opencord.org
+```
+
+Copy the private key so that the [base-openstack](../charts/base-openstack.md) chart can publish it as a secret:
+
+```shell
+cp ~/.ssh/id_rsa xos-profiles/base-openstack/files/node_key
+```
+
+The VTN app requires a fabric interface on the compute nodes.  VTN will not successfully initialize if this interface is not present. By default this interface is expected to be named `fabric`. If the compute node does not have an actual fabric interface, create a dummy interface as follows:
+
+```shell
+sudo modprobe dummy
+sudo ip link set name fabric dev dummy0
+sudo ifconfig fabric up
+```
+
+Finally, on each compute node, Open vSwitch must be configured to listen for
+remote connections so that it can be controlled by VTN.  Example:
+
+```shell
+PODS=$( kubectl get pod --namespace openstack|grep openvswitch-db|awk '{print $1}' )
+for POD in $PODS
+do
+  kubectl --namespace openstack exec "$POD" \
+      -- ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6641
+done
+```
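+
+Optionally, you can confirm the listener was added on one of the pods, e.g.
+reusing `$POD` from the loop above (`ovsdb-server/list-remotes` is a standard
+`ovs-appctl` command):
+
+```shell
+kubectl --namespace openstack exec "$POD" \
+    -- ovs-appctl -t ovsdb-server ovsdb-server/list-remotes
+```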
diff --git a/profiles/mcord/install.md b/profiles/mcord/install.md
index e513eec..79e8efb 100644
--- a/profiles/mcord/install.md
+++ b/profiles/mcord/install.md
@@ -1,23 +1,107 @@
-# MCORD
+# M-CORD
 
 ## Prerequisites
 
-Lorem ipsum dolor sit amet, consectetur adipisicing elit. Nobis veritatis
-eligendi vitae dolorem animi non unde odio, hic quasi totam recusandae repellat
-minima provident aliquam eveniet a tempora saepe. Iusto.
+M-CORD requires OpenStack to run VNFs.  The OpenStack installation must be customized with the *onos_ml2* Neutron plugin.
 
-- A Kubernetes cluster (you will need a [multi nodecluster](../../prereqs/k8s-multi-node.md))
-- Helm, follow [this guide](../../prereqs/helm.md)
-- Openstack-Helm, follow [this guide](../../prereqs/openstack-helm.md)
+- To install Kubernetes, Helm, and a customized `openstack-helm` on a single node or a multi-node cluster, follow [this guide](../../prereqs/openstack-helm.md).
+- To configure the nodes so that VTN can provide virtual networking for OpenStack, follow [this guide](../../prereqs/vtn-setup.md).
 
 ## CORD Components
 
-Lorem ipsum dolor sit amet, consectetur adipisicing elit. Fugit et quam tenetur
-maiores dolores ipsum hic ex doloremque, consectetur porro sequi vitae tempora
-in consequuntur provident nostrum nobis. Error, non?
-
-Then you need to install this charts:
+Bring up the M-CORD controller by installing the following charts in order:
 
 - [xos-core](../../charts/xos-core.md)
-- [onos-fabric](../../charts/onos.md#onos-fabric)
+- [base-openstack](../../charts/base-openstack.md)
 - [onos-vtn](../../charts/onos.md#onos-vtn)
+- [onos-fabric](../../charts/onos.md#onos-fabric)
+- [mcord](../../charts/mcord.md)
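+
+The following is a condensed sketch of that sequence (release names and
+values are illustrative; consult each linked chart page for the flags your
+environment needs):
+
+```bash
+helm dep update xos-core && helm install -n xos-core xos-core
+helm dep update xos-profiles/base-openstack && \
+    helm install -n base-openstack xos-profiles/base-openstack \
+        --set computeNodes.master.name=$(hostname) \
+        --set vtn-service.sshUser=$(whoami)
+helm install -n onos-cord onos   # onos-vtn
+# ...install onos-fabric per charts/onos.md#onos-fabric...
+helm dep update xos-profiles/mcord && \
+    helm install -n mcord xos-profiles/mcord --set proxySshUser=ubuntu
+```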
+
+## Validating the Installation
+
+Before creating any VMs, check that VTN has initialized the nodes
+correctly.  On the OpenStack Helm master node, run:
+
+```bash
+# password: rocks
+ssh -p 8101 onos@onos-cord-ssh.default.svc.cluster.local cordvtn-nodes
+```
+
+You should see all nodes in `COMPLETE` state.
+
+Next, check that the VNF images are loaded into OpenStack (they are quite large
+so this may take a while to complete):
+
+```bash
+export OS_CLOUD=openstack_helm
+openstack image list
+```
+
+You should see output like the following:
+
+```text
++--------------------------------------+-----------------------------+--------+
+| ID                                   | Name                        | Status |
++--------------------------------------+-----------------------------+--------+
+| b648f563-d9a2-4770-a6d8-b3044e623366 | Cirros 0.3.5 64-bit         | active |
+| 4287e01f-93b5-497f-9099-f526cb2044ac | image_hss_v0.1              | active |
+| e82e459c-27b4-417e-9f95-19ba3cc3fd9d | image_hssdb_v0.1            | active |
+| c62ab4ce-b95b-4e68-a708-65097c7bbe46 | image_internetemulator_v0.1 | active |
+| f2166c56-f772-4614-8bb5-cb848f9d23e3 | image_mme_v0.1              | active |
+| 472b7f9a-f2be-4c61-8085-8b0d37182d32 | image_sdncontroller_v0.1    | active |
+| 7784877f-e45c-4b1a-9eac-478efdb368cc | image_spgwc_v0.1            | active |
+| b9e2ec93-3177-458b-b3b2-c5c917f2fbcd | image_spgwu_v0.1            | active |
++--------------------------------------+-----------------------------+--------+
+```
+
+To create a virtual EPC, run the following on the master node:
+
+```bash
+sudo apt install httpie
+http -a admin@opencord.org:letmein POST http://xos-gui.default.svc.cluster.local:4000/xosapi/v1/vepc/vepcserviceinstances blueprint=mcord_5 site_id=1
+```
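+
+If `httpie` is not available, a roughly equivalent `curl` call (a sketch of
+the same request):
+
+```bash
+curl -u admin@opencord.org:letmein \
+    -H "Content-Type: application/json" \
+    -d '{"blueprint": "mcord_5", "site_id": 1}' \
+    http://xos-gui.default.svc.cluster.local:4000/xosapi/v1/vepc/vepcserviceinstances
+```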
+
+Check that the networks are created:
+
+```bash
+export OS_CLOUD=openstack_helm
+openstack network list
+```
+
+You should see output like the following:
+
+```text
++--------------------------------------+--------------------+--------------------------------------+
+| ID                                   | Name               | Subnets                              |
++--------------------------------------+--------------------+--------------------------------------+
+| 0bc8cb20-b8c7-474c-a14d-22cc4c49cde7 | s11_network        | da782aac-137a-45ae-86ee-09a06c9f3e56 |
+| 5491d2fe-dcab-4276-bc1a-9ab3c9ae5275 | management         | 4037798c-fd95-4c7b-baf2-320237b83cce |
+| 65f16a5c-f1aa-45d9-a73f-9d25fe366ec6 | s6a_network        | f5804cba-7956-40d8-a015-da566604d0db |
+| 6ce9c7e9-19b4-45fd-8e23-8c55ad84a7d7 | spgw_network       | 699829e1-4e67-46a7-af2d-c1fc72ba988e |
+| 87ffaaa3-e2a9-4546-80fa-487a256781a4 | flat_network_s1u   | 288d6a8c-8737-4e0e-9472-c869ba3e7c92 |
+| 8ec59660-4751-48de-b4a3-871f4ff34d81 | db_network         | 6f14b420-0952-4292-a9f2-cfc8b2d6938e |
+| d63d3490-b527-4a99-ad43-d69412b315b9 | sgi_network        | b445d554-1a47-4f3b-a46d-1e15a01731c0 |
+| dac99c3e-3374-4b02-93a8-994d025993eb | flat_network_s1mme | 32dd201c-8f7f-4e11-8c42-4f05734f716a |
++--------------------------------------+--------------------+--------------------------------------+
+```
+
+Check that the VMs are created (it will take a few minutes for them to come up):
+
+```bash
+export OS_CLOUD=openstack_helm
+openstack server list --all-projects
+```
+
+You should see output like the following:
+
+```text
++--------------------------------------+-----------------+--------+----------------------------------------------------------------------------------------------------+------------------+-----------+
+| ID                                   | Name            | Status | Networks                                                                                           | Image            | Flavor    |
++--------------------------------------+-----------------+--------+----------------------------------------------------------------------------------------------------+------------------+-----------+
+| 7e197142-afb1-459d-b421-cad91306d19f | mysite_vmme-2   | ACTIVE | s6a_network=120.0.0.9; flat_network_s1mme=118.0.0.5; management=172.27.0.15; s11_network=112.0.0.2 | image_mme_v0.1   | m1.large  |
+| 9fe385f5-a064-40e0-94d3-17ea87b955fc | mysite_vspgwu-1 | ACTIVE | management=172.27.0.5; sgi_network=115.0.0.3; spgw_network=117.0.0.3; flat_network_s1u=119.0.0.10  | image_spgwu_v0.1 | m1.xlarge |
+| aa6805fe-3d72-4f1e-a2eb-5546d7916073 | mysite_hssdb-5  | ACTIVE | management=172.27.0.13; db_network=121.0.0.12                                                      | image_hssdb_v0.1 | m1.large  |
+| e53138ed-2893-4073-9c9a-6eb4aa1892f1 | mysite_vhss-4   | ACTIVE | s6a_network=120.0.0.2; management=172.27.0.4; db_network=121.0.0.5                                 | image_hss_v0.1   | m1.large  |
+| 4a5960b5-b5e4-4777-8fe4-f257c244f198 | mysite_vspgwc-3 | ACTIVE | management=172.27.0.7; spgw_network=117.0.0.8; s11_network=112.0.0.4                               | image_spgwc_v0.1 | m1.large  |
++--------------------------------------+-----------------+--------+----------------------------------------------------------------------------------------------------+------------------+-----------+
+```