Adding deployment docs / major docs refactor

Change-Id: Id6ea6ff780f13abdfe3c9f6fc6b7a8feddbcc60d
diff --git a/SUMMARY.md b/SUMMARY.md
index 81d4de5..2fe6543 100644
--- a/SUMMARY.md
+++ b/SUMMARY.md
@@ -1,27 +1,27 @@
 # Summary
 
 * [Installation Guide](README.md)
-    * [Bill Of Materials](prereqs/hardware.md)
+    * [Hardware Requirements](prereqs/hardware.md)
     * [Networking Connectivity](prereqs/networking.md)
-    * Software Requirements
-        * Kubernetes
+    * [Software Requirements](prereqs/software.md)
+        * [Kubernetes](prereqs/kubernetes.md)
             * [Single Node K8s](prereqs/minikube.md)
             * [Multi Node K8s](prereqs/kubespray.md)
         * [Helm](prereqs/helm.md)
         * [Docker Registry](prereqs/docker-registry.md)
-        * [OpenStack Integration](prereqs/openstack-helm.md)
-    * [Profiles](profiles/intro.md)
+    * [Fabric setup](prereqs/fabric-setup.md)
+    * [Install CORD](profiles/intro.md)
         * [RCORD Lite](profiles/rcord-lite.md)
             * [OLT Setup](profiles/olt-setup.md)
         * [MCORD](profiles/mcord.md)
             * [EnodeB Setup](profiles/enodeb-setup.md)
+    * [OpenStack Integration](openstack.md)
     * [Helm Reference](charts/helm.md)
         * [XOS-CORE](charts/xos-core.md)
         * [ONOS](charts/onos.md)
         * [VOLTHA](charts/voltha.md)
-        * [kafka](charts/kafka.md)
-    * [Fabric setup](prereqs/fabric-setup.md)
-* Operating CORD
+        * [Kafka](charts/kafka.md)
+* [Operating CORD](operating_cord/operating_cord.md)
     * [Diagnostics](operating_cord/diag.md)
 * [Defining Models in CORD](xos/README.md)
     * [XOS Support for Models](xos/dev/xproto.md)
@@ -30,7 +30,7 @@
     * [Writing Synchronizers](xos/dev/synchronizers.md)
         * [Design Guidelines](xos/dev/sync_arch.md)
         * [Implementation Details](xos/dev/sync_impl.md)
-* Developing for CORD
+* [Development Guide](developer/developer.md)
     * [Getting the Source Code](developer/getting_the_code.md)
     * [Developer Workflows](developer/workflows.md)
     * [VTN and Service Composition](xos/xos_vtn.md)
diff --git a/developer/developer.md b/developer/developer.md
new file mode 100644
index 0000000..d66f3cb
--- /dev/null
+++ b/developer/developer.md
@@ -0,0 +1 @@
+# Development Guide
\ No newline at end of file
diff --git a/images/data_net.png b/images/data_net.png
new file mode 100644
index 0000000..7b76214
--- /dev/null
+++ b/images/data_net.png
Binary files differ
diff --git a/images/mgmt_net.png b/images/mgmt_net.png
new file mode 100644
index 0000000..ad503d0
--- /dev/null
+++ b/images/mgmt_net.png
Binary files differ
diff --git a/openstack.md b/openstack.md
new file mode 100644
index 0000000..0e9b1ce
--- /dev/null
+++ b/openstack.md
@@ -0,0 +1,6 @@
+# OpenStack Integration
+
+Lorem ipsum dolor sit amet, consectetur adipisicing elit. Quasi corporis
+officia dolorum fugit eligendi obcaecati earum, quam reprehenderit optio
+consectetur quaerat voluptates asperiores aut vel laudantium soluta laboriosam
+iure culpa.
diff --git a/operating_cord/operating_cord.md b/operating_cord/operating_cord.md
new file mode 100644
index 0000000..49966c6
--- /dev/null
+++ b/operating_cord/operating_cord.md
@@ -0,0 +1 @@
+# Operating CORD
\ No newline at end of file
diff --git a/prereqs/docker-registry.md b/prereqs/docker-registry.md
index 6b4494f..fb6586f 100644
--- a/prereqs/docker-registry.md
+++ b/prereqs/docker-registry.md
@@ -1,37 +1,35 @@
 # Docker Registry
 
-This guide will help you in deploying an insecure `docker-registry`.
+This guide describes how to install an insecure *docker-registry* on Kubernetes.
 The typical use cases for such a registry are development, POCs, or lab trials.
 
-> **Please be aware that this is NOT intended for production use**
+> **This is not meant for production use**
 
 ## What is a docker registry?
 
 If you have ever used docker, you have certainly used a docker registry.
-The most used `docker-registry` is the default public one: `hub.docker.com`
+The most commonly used *docker-registry* is the default public one: <https://hub.docker.com>.
 
-In certain cases, such as development or when the public registry is not
-reachable, you may want to setup a private version on it, to push and pull
-your images in a more controlled way.
+In some cases, such as development, or when the public registry is not
+reachable, you may want to set up a private registry to push and pull images in a more controlled way.
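+
+For example, here is a sketch of how you would push a locally built image to such a private registry instead of the public one (the image name and registry address are illustrative; the Docker daemon may also need the registry listed under its insecure-registries option):
+
+```shell
+# Tag the local image so it points at the private registry
+docker tag my-image:latest REGISTRY_IP:30500/my-image:latest
+
+# Push it to the private registry
+docker push REGISTRY_IP:30500/my-image:latest
+```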
 
-For more information about docker registries, please take a look
-at the [official documentation](https://docs.docker.com/registry/).
+More information about docker registries can be found at <https://docs.docker.com/registry/>.
 
-## Deploy an insecure docker registry on top of Kubernets
+## Deploy an insecure docker registry on top of Kubernetes
 
-We suggest to use the official helm-chart to deploy a docker-registry,
-and this command will deploy it and expose it on port `30500`:
+Helm provides a default helm-chart to deploy the registry.
+The following command deploys it and exposes it on node port *30500*:
 
 ```shell
 helm install stable/docker-registry --set service.nodePort=30500,service.type=NodePort -n docker-registry
 ```
 
-> In any moment you can check the images available on your registry with this
-> command:
+> The registry can be queried at any time:
+>
 > ```shell
> curl -X GET http://KUBERNETES_IP:30500/v2/_catalog
 > ```
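+
+On a freshly deployed registry, the expected reply is an empty catalog:
+
+```shell
+{"repositories":[]}
+```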
 
-## Push the images to the docker registry
+## Push images to the docker registry
 
 {% include "/partials/push-images-to-registry.md" %}
diff --git a/prereqs/fabric-setup.md b/prereqs/fabric-setup.md
index 85295d2..947aea5 100644
--- a/prereqs/fabric-setup.md
+++ b/prereqs/fabric-setup.md
@@ -1 +1 @@
-# How to install fabric switches
\ No newline at end of file
+# How to install fabric switches
diff --git a/prereqs/hardware.md b/prereqs/hardware.md
index 5bb36b1..a177f09 100644
--- a/prereqs/hardware.md
+++ b/prereqs/hardware.md
@@ -1,12 +1,88 @@
-# Hardware Requirements
+# Hardware requirements
 
-## BOM
+To build CORD you'll need different hardware components, depending on the specific requirements of your deployment.
 
-Lorem ipsum dolor sit amet, consectetur adipisicing elit. Sed vitae vel
-reiciendis, adipisci voluptatum perferendis voluptas blanditiis, eos inventore
-maiores ipsam facere aliquid ex repudiandae itaque praesentium mollitia at,
-architecto.
+## Generic hardware guidelines
 
-## RCORD Specifics
+* **Compute machines**: In principle, CORD can be deployed on any x86 machine, physical or virtual. For development, demos, or lab trials you may want to use a single machine (even your laptop can be fine, as long as it provides enough hardware resources). For more realistic deployments we suggest using at least three machines, ideally identical to one another. The exact characteristics of these machines depend on many factors. At a high level, each machine should have at the very minimum a 4-core CPU, 32GB of RAM, and 100GB of disk capacity. More sophisticated use cases, such as M-CORD, require more resources. See the paragraphs below for more information.
 
-## MCORD Specifics
+* **Network cards**: Whatever server you use, it should have at the very minimum a 1G network interface for management.
+
+* **Fabric switches**: Fabric switches should be compatible with the ONOS Trellis application that controls them, so we strongly suggest sticking to one of the models listed below, depending on your requirements. 10G switches are usually preferred for initial functional tests and lab deployments, since they are cheaper. Moreover, 10G ports can usually be downgraded to 1G speed, and copper SFPs can be connected to them. The number of switches largely depends on your needs: for basic scenarios one switch may be enough, for more complete fabric tests we suggest at least four, and more for more complex deployments. Developers sometimes emulate the fabric in software (using Mininet), but this only applies to specific use cases.
+
+* **Access equipment**: At the moment, both R-CORD and M-CORD work with very specific access equipment. We strongly suggest sticking to the models listed in the following paragraphs.
+
+* **Optics and cabling**: Some hardware may be picky about the optics. Optics and cable models tested by the community are listed below.
+
+* **Other**: Besides all of the above, you will need a development/management machine and an L2 management switch to connect things together. Usually a laptop is enough for the former, and a legacy L2 switch is enough for the latter.
+
+## Suggested hardware
+
+The following is a list of hardware that members of the ONF community have tested over time in lab trials.
+
+* **Compute machines**
+    * OCP Inspired&trade; QuantaGrid D51B-1U server. Each server is configured with 2x Intel E5-2630 v4 10C 2.2GHz 85W, 64GB of RAM 2133MHz DDR4, 2x 500GB HDD, and a 40 Gig adapter.
+
+* **Fabric Switches**
+    * **1G/10G** models (with 40G uplinks)
+        * OCP Accepted&trade; EdgeCore AS5712-54X
+        * OCP Accepted&trade; EdgeCore AS5812-54X
+        * QuantaMesh T3048-LY8
+    * **40G** models
+        * OCP Accepted&trade; EdgeCore AS6712-32X
+    * **100G** models
+        * OCP Accepted&trade; EdgeCore AS7712-32X
+        * QuantaMesh BMS T7032-IX1/IX1B
+
+* **Fabric optics and DACs**
+    * **10G DACs**
+        * Robofiber QSFP-10G-03C SFP+ 10G direct attach passive
+        copper cable, 3m length - S/N: SFP-10G-03C
+    * **40G DACs**
+        * Robofiber QSFP-40G-03C QSFP+ 40G direct attach passive
+        copper cable, 3m length - S/N: QSFP-40G-03C
+
+* **R-CORD access equipment and optics**
+    * **XGS-PON**
+        * **OLT**: EdgeCore ASFVOLT16 (for more info <bartek_raszczyk@edge-core.com>)
+        * Compatible **OLT optics**
+            * Hisense/Ligent: LTH7226-PC, LTH7226-PC+
+            * Source Photonics: XPP-XG2-N1-CDFA
+        * **ONU**: AlphaNetworks PON-34000B (for more info <ed-y_chen@alphanetworks.com>)
+        * Compatible **ONU optics**
+            * Hisense/Ligent: LTF7225-BC, LTF7225-BH+
+
+* **M-CORD specific requirements**
+    * **Servers**: Some components of M-CORD require at least an Intel XEON CPU with Haswell microarchitecture or better.
+    * **eNodeBs**:
+        * Cavium Octeon Fusion CNF7100 (for more info <kin-yip.liu@cavium.com>)
+
+## BOM examples
+
+Below are some BOM examples you can take inspiration from.
+
+### Basic lab tests
+
+The goal is to try CORD, perhaps modify or develop some software, and deploy it locally in a lab.
+
+* 1x x86 server (with a 10G interface if you need to support VNFs)
+* 1x fabric switch (10G)
+* 1x DAC cable (if you need to support VNFs)
+* Ethernet copper cables, as needed
+* Access equipment, as needed
+* 1x or more developer workstations (e.g. a laptop) to develop and deploy
+* 1x L2 legacy management switch
+
+### More complex lab tests
+
+The goal is to have a good representation of a realistic deployment, and to run more complex deployments and tests in a lab.
+
+* 3x x86 servers (with 10G/40G/100G interfaces if you need to support VNFs)
+* 4x fabric switches (10G/40G/100G)
+* 7x DAC cables, plus 3 more to connect the servers (if you need to support VNFs)
+* Ethernet copper cables, as needed
+* Access equipment, as needed
+* 1x or more developer workstations (e.g. a laptop) to develop and deploy
+* Alternatively, a dedicated management/development server
+* 1x L2 legacy management switch
diff --git a/prereqs/helm.md b/prereqs/helm.md
index 788d0d5..1596ca5 100644
--- a/prereqs/helm.md
+++ b/prereqs/helm.md
@@ -1,52 +1,50 @@
 # Helm Installation guide
 
+This page assumes that *Kubernetes* has already been installed and that *kubectl* can access the cluster.
+
+CORD uses helm to deploy containers on Kubernetes. As such, helm must be installed before trying to deploy any CORD container.
+
+Helm documentation can be found at <https://docs.helm.sh/>
+
 ## What is helm?
 
 {% include "/partials/helm/description.md" %}
 
-## How to install helm
+## Install helm (and tiller)
 
-The full instructions and basic commands to get started with helm can be found
-here: <https://docs.helm.sh/using_helm/#quickstart>
+Helm is made of two components:
 
-For simplicity here are are few commands that you can use to install `helm` on
-your system:
+* the helm client, most often simply called *helm*: the client-side component, basically the CLI utility
+* tiller: the server-side component, which interprets the client commands and executes tasks on the Kubernetes cluster
 
-### macOS
+Helm can be installed on any device able to reach the Kubernetes cluster (e.g. the developer laptop, or another server in the network). Tiller should be installed in the Kubernetes cluster itself, through the kubectl CLI.
 
-```shell
-brew install kubernetes-helm
-```
+### Install helm client
 
-### Linux
+Follow the instructions at <https://docs.helm.sh/using_helm/#installing-helm>
 
-```shell
-wget https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
-tar -zxvf helm-v2.9.1-linux-amd64.tar.gz
-mv linux-amd64/helm /usr/local/bin/helm
-```
+### Install tiller
 
-### Initialize helm and setup Tiller
+To install tiller, type the following commands from any device that can already access the Kubernetes cluster.
 
 ```shell
 helm init
+kubectl create serviceaccount --namespace kube-system tiller
+kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
+kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
+helm init --service-account tiller --upgrade
 ```
 
-Once `helm` is installed you should be able to run the command `helm list`
-without errors
+Once *helm* and *tiller* are installed, you should be able to run the command *helm ls* without errors.
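+
+If something seems wrong, a quick sanity check is to verify that the tiller pod is running (a sketch; the exact pod name will differ):
+
+```shell
+kubectl get pods --namespace kube-system | grep tiller
+```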
 
-## What is an helm chart?
+## What are helm charts?
 
-Charts are the packaging format used by helm.
-A chart is a collection of files that describe
-a related set of Kubernetes resources.
+Helm charts are the packaging format used by helm. A chart is a collection of files that describe a related set of Kubernetes resources.
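+
+As a sketch, a chart is typically laid out as a directory like the following (names are illustrative):
+
+```shell
+mychart/
+  Chart.yaml          # chart metadata (name, version, description)
+  values.yaml         # default configuration values
+  templates/          # templated Kubernetes resource definitions
+    deployment.yaml
+    service.yaml
+```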
 
-For example in CORD we are using charts to define every single components,
-such as:
+CORD uses charts to define each component. For example:
 
-- [xos-core](../charts/xos-core.md)
-- [onos](../charts/onos.md)
-- [voltha](../charts/voltha.md)
+* [xos-core](../charts/xos-core.md)
+* [onos](../charts/onos.md)
+* [voltha](../charts/voltha.md)
 
-You can find the full chart documentation here:
-<https://docs.helm.sh/developing_charts/#charts>
+More info on Helm charts can be found at <https://docs.helm.sh/developing_charts/#charts>.
diff --git a/prereqs/kubernetes.md b/prereqs/kubernetes.md
new file mode 100644
index 0000000..d9d8666
--- /dev/null
+++ b/prereqs/kubernetes.md
@@ -0,0 +1,30 @@
+# Kubernetes
+
+A generic CORD installation can run on any version of Kubernetes (>= 1.9), together with Helm.
+
+The Internet is full of different Kubernetes releases, as well as resources that can help you get going.
+While this may sound confusing at first, the good news is that you're not alone!
+
+Pointing you to one specific version of Kubernetes probably wouldn't make much sense, since each version may fit different deployment needs.
+Still, we think it's useful to point you to some well-known releases that can be used for different types of deployments.
+
+The following paragraphs provide guidelines, pointers, and automated scripts to let you quickly install both Kubernetes and Helm.
+
+Whatever version of Kubernetes you've installed, a client tool called *kubectl* is usually needed to interact with it. To install kubectl on your development machine, follow this guide: <https://kubernetes.io/docs/tasks/tools/install-kubectl/>.
+
+Once Kubernetes is installed, you should have a KubeConfig configuration file containing all the details of your deployment (machine addresses, credentials, ...). This file can be used to access your Kubernetes deployment from either kubectl or helm. Here is how:
+
+```shell
+export KUBECONFIG=/path/to/your/kubeconfig/file
+```
+
+You can also export this environment variable permanently, so you don't have to export it each time you open a new terminal window. More info on this topic can be found at <https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/>.
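+
+For example, assuming you use bash, a sketch of how to export the variable permanently:
+
+```shell
+# Append the export to your shell profile and reload it
+echo 'export KUBECONFIG=/path/to/your/kubeconfig/file' >> ~/.bashrc
+source ~/.bashrc
+```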
+
+To test if the installation and the configuration steps were successful, type:
+
+```shell
+kubectl get pods
+```
+
+Kubernetes should reply to your request without printing any errors.
+
+More on Kubernetes and Kubectl commands can be found on the official Kubernetes website, <https://kubernetes.io/docs/tutorials/>.
+
diff --git a/prereqs/kubespray.md b/prereqs/kubespray.md
index 3ad8325..de79c4c 100644
--- a/prereqs/kubespray.md
+++ b/prereqs/kubespray.md
@@ -1,6 +1,112 @@
-# Install Kubespray on a multiple nodes
+# Multi-node Kubernetes
 
-Lorem ipsum dolor sit amet, consectetur adipisicing elit. Impedit tempora
-veniam laborum deleniti aperiam similique voluptatum architecto, rerum. Quae
-neque, quaerat. Voluptate voluptates, sunt obcaecati perferendis minima itaque
-adipisci quisquam.
+Multi-node Kubernetes installations are usually suggested for production and larger trials.
+
+## Kubernetes, multi-node well-known releases
+
+* Kubespray
+    * Documentation: <https://github.com/kubernetes-incubator/kubespray>
+    * What is used for: usually, for production deployments
+    * Minimum requirements:
+        * At least three machines (more info on hardware requirements on the Kubernetes website)
+
+## Kubespray lab-trial installation scripts
+
+For your convenience, CORD provides some easy-to-use automated scripts to quickly install a lab environment in a few commands.
+The goal of these scripts is to install Kubespray on a set of (at least three) target machines.
+
+At the end of the procedure, Kubespray should be installed.
+
+### Requirements
+
+* At least 4 machines: an operator machine (e.g. a laptop), plus at least 3 target servers
+* Operator machine
+    * Has Git installed
+    * Has Python3 installed (<https://www.python.org/downloads/>)
+    * Has a stable version of Ansible installed (<http://docs.ansible.com/ansible/latest/intro_installation.html>)
+    * Is able to reach the target servers (can SSH into them)
+* Target servers
+    * Have Ubuntu 16.04 installed
+    * Are able to communicate with one another (can ping each other)
+    * Have the same user *cord* configured, which you can use to access them remotely from the operator machine
+    * Have the user *cord* configured as a sudoer on each machine, with no password required for sudo access (see the sketch below)
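+
+For example, here is a sketch of how the last two requirements might be satisfied, assuming the *cord* user already exists on each target server (addresses and usernames are illustrative):
+
+```shell
+# On the operator machine: copy your SSH public key to each target server
+ssh-copy-id cord@10.90.0.101
+
+# On each target server: grant the cord user passwordless sudo
+echo 'cord ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/cord
+```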
+
+### More on the Kubespray installation scripts
+
+All scripts are in the *kubespray-installer* folder of the repository (downloaded in the next step). From now on, the guide assumes you're running commands from this folder.
+
+The main script (*setup.sh*) provides a helper with instructions. Just run *./setup.sh --help* to see it.
+
+The two main functions are:
+
+* Install Kubespray on an arbitrary number of target machines
+* Export the k8s configuration file path as an environment variable, to let the user access a specific deployment
+
+### Get the Kubespray installation scripts
+
+On the operator machine:
+
+```shell
+git clone https://gerrit.opencord.org/automation-tools
+```
+
+Inside, you will find a folder called *kubespray-installer*.
+
+### Install Kubespray
+
+In the following example we assume that:
+
+* Remote machines have the following IP addresses:
+    * 10.90.0.101
+    * 10.90.0.102
+    * 10.90.0.103
+
+* The deployment/POD has been given an arbitrary name: *onf*
+
+The installation procedure goes through the following steps (in this order):
+
+* Cleans up any old Kubespray folder previously downloaded
+* Downloads a new, stable Kubespray installation repository
+* Copies the public key of the operator over to each target machine
+* Installs required software and configures the target machines as prescribed in the Kubespray guide
+* Deploys Kubespray
+* Downloads and exports the access configuration outside the Kubespray folder, so it won’t be removed at the next execution of the script (for example while trying to re-deploy the POD, or while deploying a different POD)
+
+To run the installation script, type:
+
+```shell
+./setup.sh -i onf 10.90.0.101 10.90.0.102 10.90.0.103
+```
+
+**NOTE:** the first time you use the script, you will be prompted to enter your password multiple times.
+
+At the end of the procedure, Kubespray should be installed and running on the remote machines.
+
+The configuration file used to access the POD will be saved as *configs/onf.conf*.
+
+Want to deploy another POD without affecting your existing deployment?
+
+Run the following:
+
+```shell
+./setup.sh -i my_other_deployment 192.168.0.1 192.168.0.2 192.168.0.3
+```
+
+Your *onf.conf* configuration will still be there, and your new *my_other_deployment.conf* file will sit right next to it!
+
+### Access the Kubespray deployment
+
+Kubectl and helm need to be pointed to a specific cluster before being used.
+
+The script also helps you automatically export the path to an existing Kubespray configuration file, previously generated during the installation.
+
+For example, if you want to run *kubectl get nodes* against the *onf* cluster just deployed, you should run:
+
+```shell
+source setup.sh -s onf
+```
+
+This will automatically run the following for you:
+
+```shell
+export KUBECONFIG=FULL_PATH/kubespray-installer/configs/onf.conf
+```
+
+As a result, you’ll now be able to successfully run *kubectl get nodes*.
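+
+Putting it all together, a typical session against the *onf* cluster might look like this (output will vary):
+
+```shell
+source setup.sh -s onf
+kubectl get nodes
+helm ls
+```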
diff --git a/prereqs/minikube.md b/prereqs/minikube.md
index 1dcfd2c..ed7ee20 100644
--- a/prereqs/minikube.md
+++ b/prereqs/minikube.md
@@ -1,5 +1,8 @@
 # Install Minikube on a single node
 
-Lorem ipsum dolor sit amet, consectetur adipisicing elit. Assumenda unde
-repudiandae quaerat doloribus dicta facilis, ipsam molestias, fugiat ducimus
-voluptatum, nostrum impedit iure enim minus vel consectetur labore modi, est.
+* **Documentation**: <https://kubernetes.io/docs/getting-started-guides/minikube/>
+
+* **What is used for**: usually adopted in development environments. It can even be installed locally, on your own machine (see the sketch below)
+
+* **Minimum requirements**:
+    * One machine, either a physical machine or a VM. It could also be your own PC!
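+
+As a quick sketch, bringing up a local cluster and verifying it might look like this (flags may differ; check the official documentation):
+
+```shell
+minikube start
+kubectl get nodes
+```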
diff --git a/prereqs/networking.md b/prereqs/networking.md
index 7062ed1..bde9f69 100644
--- a/prereqs/networking.md
+++ b/prereqs/networking.md
@@ -1,3 +1,23 @@
-# How to cable a POD
+# Network Connectivity
 
-NOTE: how do we define this for a single/virtual setup? Do we need to do that now?
\ No newline at end of file
+Network requirements are fairly simple. There are two networks: a management network, used by operators to manage CORD, and (in some use cases) a dataplane network for end users' traffic.
+
+## Management network
+
+This is the network that connects all physical devices (compute machines, fabric switches, access devices, development machine...) together, allowing them to talk to one another and allowing operators to manage CORD.
+It is usually a 1G copper network, but this may vary from deployment to deployment.
+Network devices (access devices and fabric switches) usually connect to this network through a dedicated 1G management port.
+If everything is set up correctly, any device should be able to communicate with the others at L3 (basically, devices should be able to ping one another).
+This network is usually also used to reach the Internet for the underlay infrastructure setup (CORD itself doesn't necessarily need Internet access). For example, you'll likely need Internet access through this network to install your OS and its updates, the switch software, and Kubernetes.
+
+Below you can see a diagram of a typical management network.
+
+![CORD management network](../images/mgmt_net.png)
+
+## Dataplane network
+
+This is the network that carries the end users' traffic. Depending on the requirements, its speed may range from 1G upwards. It is completely separate from the management network. This network usually has Internet access, so that subscribers can reach the Internet.
+
+An example diagram including the dataplane network is shown below.
+
+![CORD dataplane network](../images/data_net.png)
\ No newline at end of file
diff --git a/prereqs/openstack-helm.md b/prereqs/openstack-helm.md
index fe504bf..f87b125 100644
--- a/prereqs/openstack-helm.md
+++ b/prereqs/openstack-helm.md
@@ -1,6 +1 @@
-# OpenStack Helm Installation
-
-Lorem ipsum dolor sit amet, consectetur adipisicing elit. Quasi corporis
-officia dolorum fugit eligendi obcaecati earum, quam reprehenderit optio
-consectetur quaerat voluptates asperiores aut vel laudantium soluta laboriosam
-iure culpa.
+# OpenStack Helm
diff --git a/prereqs/software.md b/prereqs/software.md
new file mode 100644
index 0000000..f847e2a
--- /dev/null
+++ b/prereqs/software.md
@@ -0,0 +1,9 @@
+# Software requirements
+
+CORD is distributed as a set of containers that can potentially run on any Kubernetes environment.
+
+As such, you can choose what operating system to use, how to configure it, and how to install Kubernetes on it.
+
+**M-CORD is the exception**, since some of its components still run on OpenStack. OpenStack is itself deployed as a set of Kubernetes containers, but these containers require a special version of Kubernetes and additional configuration. You can find more information about this in the M-CORD installation sections.
+
+The following sections describe what the CORD containers specifically require, together with some pointers to demo automated-installation scripts.
diff --git a/terminology.md b/terminology.md
new file mode 100644
index 0000000..2569c31
--- /dev/null
+++ b/terminology.md
@@ -0,0 +1,37 @@
+# Terminology
+
+This guide uses the following terminology.
+
+**CORD POD**: A single physical deployment of CORD.
+
+**Development (Dev) machine**: This is the machine used to download, build and deploy CORD onto a POD.
+Sometimes it is a dedicated server, and sometimes the developer's laptop. In
+principle, it can be any machine that satisfies the hardware and software
+requirements.
+
+**Development (Dev) VM**: Bootstrapping the CORD installation requires a lot of software to be installed and some non-trivial configurations to be applied.  All this should happen on the dev machine.  To help users with the process, CORD provides an easy way to create a VM on the dev machine with all the required software and configurations in place.
+
+**Compute Node(s)**: A server in a POD that runs VMs or containers associated
+with one or more tenant services. This terminology is borrowed from OpenStack.
+
+**Head Node**: A compute node of the POD that also runs management services.
+This includes for example XOS (the orchestrator), two instances of ONOS (the
+SDN controller, one to control the underlay fabric and one to control the
+overlay), MAAS and all the services needed to automatically install and
+configure the rest of the POD devices.
+
+**Fabric Switch**: A switch in a POD that interconnects other switches and
+servers inside the POD.
+
+**vSG**: The virtual Subscriber Gateway (vSG) is the CORD counterpart for
+existing CPEs. It implements a bundle of subscriber-selected functions, such as
+Restricted Access, Parental Control, Bandwidth Metering, Access Diagnostics and
+Firewall. These functions run on commodity hardware located in the Central
+Office rather than on the customer's premises. There is still a device in the
+home (which we still refer to as the CPE), but it has been reduced to a
+bare-metal switch.
+