Merge "Troubleshooting page for failing dhcp"
diff --git a/README.md b/README.md
index 8f18a6e..86214fa 100644
--- a/README.md
+++ b/README.md
@@ -17,9 +17,10 @@
 consider running a complete system entirely emulated in software using
 [SEBA-in-a-Box](./profiles/seba/siab-overview.md).
 
-If you are anxious to jump straight to a [Quick Start](quickstart.md)
-procedure that brings up a subset of the CORD platform running
-on your laptop (without a subscriber data plane), that too is an option.
+If you prefer a gentle walk-through of the process of bringing up a
+subset of the CORD platform running on your laptop (e.g., to get an
+introduction to all the moving parts in CORD), then jumping to the
+[Quick Start](quickstart.md) page is also an option.
 
 Finally, if you want to get a broader lay-of-the-land, you
-might step back and start with an [Overview](overview.md).
+might step back and read the [Overview](overview.md).
diff --git a/SUMMARY.md b/SUMMARY.md
index 7c5e40d..ee787a8 100644
--- a/SUMMARY.md
+++ b/SUMMARY.md
@@ -17,13 +17,11 @@
                 * [Single Node](prereqs/k8s-single-node.md)
                 * [Multi-Node](prereqs/k8s-multi-node.md)
             * [Helm](prereqs/helm.md)
-            * [Optional Packages](prereqs/optional.md)
-                * [OpenStack](prereqs/openstack-helm.md)
     * [Fabric Software Setup](fabric-setup.md)
     * [Install Platform](platform.md)
     * [Install Profile](profiles.md)
     * [Offline Install](offline-install.md)
-    * [Attach containers to external NICs](operating_cord/veth_intf.md)
+    * [Attach Container to a NIC](operating_cord/veth_intf.md)
 * [Operations Guide](operating_cord/operating_cord.md)
     * [General Info](operating_cord/general.md)
         * [GUI](operating_cord/gui.md)
@@ -40,11 +38,7 @@
     * [Getting the Source Code](developer/getting_the_code.md)
     * [Modeling Services](developer/xos-intro.md)
     * [Developer Workflows](developer/workflows.md)
-        * [Working on R-CORD Without an OLT/ONU](developer/configuration_rcord.md)
     * [Building Docker Images](developer/imagebuilder.md)
-    * [Example Services](examples/examples.md)
-        * [SimpleExampleService](simpleexampleservice/simple-example-service.md)
-        * [ExampleService](exampleservice/example-service.md)
     * [GUI Development](xos-gui/developer/README.md)
         * [Quickstart](xos-gui/developer/quickstart.md)
         * [GUI Extensions](xos-gui/developer/gui_extensions.md)
@@ -114,6 +108,7 @@
     * [Kafka](charts/kafka.md)
     * [Base Kubernetes](charts/base-kubernetes.md)
     * [Base OpenStack](charts/base-openstack.md)
+        * [OpenStack](prereqs/openstack-helm.md)
         * [VTN Setup](prereqs/vtn-setup.md)
     * [R-CORD](charts/rcord.md)
     * [M-CORD](charts/mcord.md)
diff --git a/developer/workflows.md b/developer/workflows.md
index f57f946..99b39f1 100644
--- a/developer/workflows.md
+++ b/developer/workflows.md
@@ -23,26 +23,19 @@
 
 Once both Helm and Minikube are installed, you can deploy the
 core components of XOS, along with the services that make
-up, for example, the R-CORD profile. This uses images published
+up, for example, the SEBA profile. This uses images published
 on DockerHub:
 
 ```shell
 cd ~/cord/helm-charts
 ```
 
-In this folder you can choose from the different charts which one to deploy.
-For example to deploy R-CORD you can follow [this guide](../profiles/rcord/install.md)
-
-### Deploy a Single Instance of Kafka
-
-Some profiles require a `kafka` message bus to work properly.
-If you need to deploy it for development purposes, a single instance
-deployment will be enough. You can do so as follows:
-
-```shell
-helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
-install --name cord-kafka incubator/kafka -f examples/kafka-single.yaml
-```
+In this folder you can choose which of the available charts to
+deploy. For example, to deploy SEBA you can follow
+[these instructions](../profiles/seba/install.md). Alternatively, if
+you are working on a new profile or a new service that is not part of
+any existing profile, you can install just the
+[CORD Platform](../platform.md).
 
 ## Making and Deploying Changes
 
@@ -93,14 +86,14 @@
 > the containers, you can just delete the corresponding POD and kubernetes will
 > restart it with the new image.
 
-## Pushing Changes to a docker registry
+## Pushing Changes to a Docker Registry
 
 If you have a remote POD that you want to test your changes on, you
 need to push your docker images to a docker registry that can be accessed
 from the POD.
 
 The way we recommend doing this is via a private docker registry.
-You can find more informations about what a
-docker registry is in the [offline installation section](../offline-install.md).
+You can find more information about what a docker registry is in the
+[offline installation section](../offline-install.md).
 
 {% include "/partials/push-images-to-registry.md" %}
diff --git a/offline-install.md b/offline-install.md
index 304c6fb..3ead38b 100644
--- a/offline-install.md
+++ b/offline-install.md
@@ -1,8 +1,10 @@
 # Offline Install
 
-Often, CORD PODs' (management networks) don't have access to Internet.
+In many cases, a CORD POD (specifically, its management network) does
+not have access to the Internet.
 
-This section of the guide provides guidelines, best-practices and examples to deploy the CORD/SEBA software without Internet connectivity.
+This section provides guidelines, best-practices, and examples to
+deploy CORD software without Internet connectivity.
 
 > NOTE: The guide assumes that the Operating Systems (both on servers and on network devices) and Kubernetes are already installed and running.
 
@@ -10,19 +12,30 @@
 
 * When the CORD POD has no access to Internet, so artifacts used for the installation (i.e. Docker images) cannot be downloaded directly to the POD.
 
-* While developing, you may want to test your changes pushing artifacts to the POD, before uploading them to the official docker repository.
+* While developing, you may want to test your changes by pushing artifacts to the POD before uploading them to the official Docker repository.
 
-## Target Infrastructure, requirements overview
+## Target Infrastructure / Requirements Overview
 
 Your target infrastructure (where the POD runs) needs
 
-* A **local Docker Registry** where to push your Docker images (previously pulled from the web). If you don't have one, follow the notes below to deploy with helm a local Docker registry on top of your existing Kubernetes cluster.
+* **Local Docker Registry:** To push your Docker images
+  (previously pulled from the web). If you don't have one, follow the
+  notes below to use Helm to deploy a local Docker registry on top
+  of your existing Kubernetes cluster.
 
-> More informations about docker registries can be found at <https://docs.docker.com/registry/>.
+> More information about Docker registries can be found at <https://docs.docker.com/registry/>.
 
-* A **local webserver** to host the ONOS applications (.oar files), that are instead normally downloaded from Sonatype. If you don't have one, follow the notes below to quickly deploy a webserver on top of your existing Kubernetes cluster.
+* **Local Webserver:** To host the ONOS applications (.oar files),
+  which are normally downloaded from Sonatype. If you don't
+  have one, follow the notes below to quickly deploy a webserver on
+  top of your existing Kubernetes cluster.
+
+* **Kubernetes Servers Default Gateway:** For `kube-dns` to
+  work, a default route (even one pointing to a non-existing or
+  non-working gateway) needs to be set on the machines hosting
+  Kubernetes. This is a Kubernetes requirement, not something
+  specific to CORD.
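+
+  As a sketch (assuming *192.168.0.254* is an unused address on the
+  management subnet; substitute a value that fits your deployment),
+  such a dummy default route can be set with:
+
+  ```shell
+  sudo ip route add default via 192.168.0.254
+  ```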
 
-## Prepare the offline installation
+## Prepare the Offline Installation
 
 This should be done from a machine that has access to Internet.
 
@@ -63,37 +76,55 @@
 docker save IMAGE_NAME:TAG > FILE_NAME.tar
 ```
 
-* If the artifacts need to be deployed to the target infrastructure from a different machine, save the helm-charts directory, the ONOS applications, the docker images downloaded and the additional helm charts variable extension file.
+* If the artifacts need to be deployed to the target infrastructure
+  from a different machine, save the helm-charts directory, the ONOS
+  applications, the downloaded Docker images, and the additional helm
+  chart variable extension file.
 
-## Deploy the artifacts to your infrastructure and install CORD/SEBA
+## Deploy the Artifacts to Your Infrastructure
 
-This should not require any Internet connectivity. To deploy the artifacts to your POD, do the following from  machine that has access to your Kubernetes cluster:
+This should not require any Internet connectivity. To deploy the
+artifacts to your POD, do the following from a machine that has access
+to your Kubernetes cluster:
 
-* Optionally, if at the previous step you saved the Docker images on an external hard drive as .tar files, restore them in the deployment machine Docker registry. For each image (file), use the docker command
+* Optionally, if at the previous step you saved the Docker images on
+  an external hard drive as .tar files, restore them on the deployment
+  machine. For each image (file), use the Docker
+  command:
 
 ```shell
 docker load < FILE_NAME.tar
 ```
 
-* Tag and push your Docker images to the local Docker registry running in your infrastructure. More info on this can be found in the paragraph below.
+* Tag and push your Docker images to the local Docker registry running
+  in your infrastructure. More info on this can be found in the
+  paragraph below.
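+
+  As a sketch (assuming the insecure local registry described below,
+  exposed on port *30500*; the image name is only an example), tagging
+  and pushing a single image looks like:
+
+  ```shell
+  docker tag xosproject/xos-core:master KUBERNETES_IP:30500/xosproject/xos-core:master
+  docker push KUBERNETES_IP:30500/xosproject/xos-core:master
+  ```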
 
-* Copy the ONOS applications to your local web server. The procedure largely varies from the web server you run, its configuration, and what ONOS applications you need.
+* Copy the ONOS applications to your local web server. The procedure
+  varies widely depending on the web server you run, its
+  configuration, and which ONOS applications you need.
 
-* Deploy CORD and its profile(s) using the helm charts. Remember to load with the *-f* option the additional configuration file to override the helm charts, if any.
+* Deploy CORD using the helm charts. Remember to load the additional
+  configuration file that extends the helm charts (if any) with the
+  *-f* option.
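+
+For example, assuming your overrides live in a hypothetical file
+called `local-repo.yaml`, the platform installation command might look
+like:
+
+```shell
+helm install -n cord-platform cord/cord-platform -f local-repo.yaml
+```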
 
 {% include "/partials/push-images-to-registry.md" %}
 
-## Optional packages
+## Optional Packages
 
-### Install a Docker Registry using helm
+### Install a Docker Registry Using Helm
 
-If you don't have a local Docker registry deployed in your infrastructure, you can install an **insecure** one using the official Kubernetes helm-chart.
+If you don't have a local Docker registry deployed in your
+infrastructure, you can install an **insecure** one using the official
+Kubernetes helm-chart.
 
 Since this specific docker registry is packaged as a kubernetes pod, shipped with helm, you'll need Internet connectivity to install it.
 
-> **Note:** *Insecure* registries can be used for development, POCs or lab trials. **You should not use this in production.** There are planty of documents online that guide you through secure registries setup.
+> **Note:** *Insecure* registries can be used for development, POCs, or lab trials. **You should not use them in production.** There are plenty of documents online that guide you through secure registry setup.
 
-The following command deploys the registry and exposes the port *30500*. (You may want to change it with any value that fit your deployment needs).
+The following command deploys the registry and exposes port
+*30500*. (You may want to change this to any value that fits your
+deployment needs.)
 
 ```shell
 helm install stable/docker-registry --set service.nodePort=30500,service.type=NodePort -n docker-registry
@@ -105,9 +136,11 @@
 curl -X GET http://KUBERNETES_IP:30500/v2/_catalog
 ```
 
-### Install a local web server using helm (optional)
+### Install a Local Webserver Using Helm (optional)
 
-If you don't have a local web server that can be accessed from the POD, you can easily install one on top of your existing Kubernetes cluster.
+If you don't have a local web server that can be accessed from the
+POD, you can easily install one on top of your existing Kubernetes
+cluster.
 
 ```shell
 # From the helm-charts directory, while preparing the offline install
@@ -118,21 +151,27 @@
 helm install -n mavenrepo --set service.type=NodePort --set service.nodePorts.http=30160 bitnami/nginx
 ```
 
-The webserver will be up in few seconds and you'll be able to reach the root web page using the IP of one of your Kubernetes nodes, port *30160*. For example, you can do:
+The webserver will be up in a few seconds and you'll be able to reach
+the root web page using the IP of one of your Kubernetes nodes, port
+*30160*. For example, you can do:
 
 ```shell
 wget KUBERNETES_IP:30160
 ```
 
-OAR images can be copied to the document root of the web server using the *kubectl cp* command. For example:
+OAR images can be copied to the document root of the web server using
+the *kubectl cp* command. For example:
 
 ```shell
 kubectl cp my-onos-app.oar `kubectl get pods | grep mavenrepo | awk '{print $1;}'`:/opt/bitnami/nginx/html
 ```
 
-## Example: offline SEBA install
+## Example: Offline SEBA Install
 
-The following section provides an exemplary list of commands to perform an offline SEBA POD installation. Please, note that some command details (i.e. chart names, image names, tools) may have changed over time.
+The following section provides an example list of commands to
+perform an offline SEBA POD installation. Please note that some
+command details (e.g., chart names, image names, tools) may have
+changed over time.
 
 ### Assumptions
 
@@ -152,7 +191,7 @@
 
 * The IP address of the machine hosting Kubernetes is 192.168.0.100.
 
-### Prepare the installation
+### Prepare the Installation
 
 ```shell
 # Clone the automation-tools repo
@@ -281,7 +320,7 @@
 scp/wget... openolt.deb
 ```
 
-### Offline deployment
+### Offline Deployment
 
 ```shell
 # Tag and push the images to the local Docker registry
diff --git a/operating_cord/veth_intf.md b/operating_cord/veth_intf.md
index 4ccf526..4bbe2cb 100644
--- a/operating_cord/veth_intf.md
+++ b/operating_cord/veth_intf.md
@@ -1,18 +1,27 @@
-# Manually connect containers to a network card
+# Attach Container to a NIC
 
-Sometimes you may need to attach some containers NICs to the network cards of the machines hosting them, for example to run some data plane traffic through them.
+Sometimes you may need to attach a container's virtual network
+interface to the NIC of the machine hosting it, for example, to run
+some data plane traffic through the container.
 
-Although CORD doesn't fully support this natively there are some (hackish) ways to do this manually.
+Although CORD doesn't fully support this natively, there are some
+(hackish) ways to do this manually.
 
 ## Create a bridge and a veth
 
-The easiest way to do this is to skip Kubernetes and directly attach the Docker container link it to the host network interface, through a Virtual Ethernet Interface Pair (veth pair).
+The easiest way to do this is to skip Kubernetes and directly link
+the Docker container to the host network interface through a
+Virtual Ethernet Interface Pair (veth pair).
 
 Let's see how.
 
-For completeness, let's assume you're running a three nodes Kubernetes deployment, and that you're trying to attach a container *already deployed* called *vcore-5b4c5478f-lxrpb* to a physical interface *eth1* (already existing on one of the three hosts, running your container). The virtual interface inside the container will be called *eth2*.
+Let's assume you're running a three-node Kubernetes deployment, and
+that you're trying to attach a container *already deployed*, called
+*vcore-5b4c5478f-lxrpb*, to a physical interface *eth1* (already
+existing on the host running your container). The
+virtual interface inside the container will be called *eth2*.
 
-You got the name of the container running
+You get the name of the container by running:
 
 ```shell
 $ kubectl get pods [-n NAMESPACE]
@@ -20,16 +29,20 @@
 vcore-5b4c5478f-lxrpb     1/1       Running   1          7d
 ```
 
-Find out on which of the three nodes the container has been deployed
+To find out on which of the three nodes the container has been
+deployed, type:
 
 ```shell
 $ kubectl describe pod  vcore-5b4c5478f-lxrpb | grep Node
 Node:           node3/10.90.0.103
 Node-Selectors:  <none>
 ```
-As you can see from the first line, the container has been deployed by Kubernetes on the Docker daemon running on node 3 (this is just an example). In this case, with IP *10.90.0.103*.
 
-Let's SSH into the node and let's look for the specific Docker container ID
+As you can see from the first line, the container has been deployed by
+Kubernetes on the Docker daemon running on node 3 (this is just an
+example), whose IP is *10.90.0.103*.
+
+Next, SSH into the node and look for the specific Docker container ID:
 
 ```shell
 $ container_id=$(sudo docker ps | grep vcore-5b4c5478f-lxrpb | head -n 1 | awk '{print $1}')
@@ -42,31 +55,32 @@
 sudo ip link set eth1 down
 ```
 
-Create a veth called *veth0* and let's add to it the new virtual interface *eth2*
+Create a veth pair consisting of *veth0* and the new virtual interface *eth2*:
 
 ```shell
 sudo ip link add veth0 type veth peer name eth2
 ```
 
-Add the virtual network interface *eth2* to the container namespace
+Now add the virtual network interface *eth2* to the container namespace:
 
 ```shell
 sudo ip link set eth2 netns ${container_id}
 ```
 
-Bring up the virtual interface
+Bring up the virtual interface:
 
 ```shell
 sudo ip netns exec ${container_id} ip link set eth2 up
 ```
 
-Bring up *veth0*
+Bring up *veth0*:
 
 ```shell
 sudo ip link set veth0 up
 ```
 
-Create a bridge named *br1*. Add *veth0* to it and the host interface *eth1*
+Create a bridge named *br1*, and add *veth0* to it and the host
+interface *eth1*:
 
 ```shell
 sudo ip link add br1 type bridge
@@ -75,14 +89,15 @@
 
 ```
 
-Bring up again the host interface and the bridge
+Bring up again the host interface and the bridge:
 
 ```shell
 sudo ip link set eth1 up
 sudo ip link set br1 up
 ```
 
-At this point, you should see an additional interface *eth2* inside the container
+At this point, you should see an additional interface *eth2* inside
+the container:
 
 ```shell
 $ kubectl exec -it vcore-5b4c5478f-lxrpb /bin/bash
@@ -96,9 +111,10 @@
     link/ether d6:84:33:2f:8c:92 brd ff:ff:ff:ff:ff:ff
 ```
 
-## Cleanup (remove veth and bridge)
+## Cleanup (remove bridge and veth)
 
-As a follow up of the previous example, let's now try to delete what has been created so far, to bring the system back to the original state.
+To delete the connection and bring the system back to the original
+state, execute the following:
 
 ```shell
 ip link set veth0 down
diff --git a/platform.md b/platform.md
index f66fc1b..d14697c 100644
--- a/platform.md
+++ b/platform.md
@@ -3,7 +3,7 @@
 Once the prerequisites have been met, the next step to installing CORD is
 to bring up the Helm charts for the platform components. 
 
-## CORD Platform as a whole
+## CORD Platform as a Whole
 
 To install the CORD Platform you can use the corresponding chart:
 
@@ -11,14 +11,14 @@
 helm install -n cord-platform cord/cord-platform --version=6.1.0
 ```
 
-## CORD Platform as separate components
+## CORD Platform as Separate Components
 
-The main reason to install the CORD Platform by installing its standalone components
-is if you're developing on it and you need granular control.
-
-There are the components included in the `cord-platform` chart:
+Sometimes it is helpful (for example, when developing) to install the
+individual components that make up the CORD Platform one at a time.
+The following are the individual components included in the
+`cord-platform` chart:
 
 - [ONOS](./charts/onos.md#onos-manages-fabric--voltha)
-- [xos-core](./charts/xos-core.md)
-- [cord-kafka](./charts/kafka.md)
-- [logging-monitoring](./charts/logging-monitoring.md)
+- [XOS](./charts/xos-core.md)
+- [Kafka](./charts/kafka.md)
+- [Logging-Monitoring](./charts/logging-monitoring.md)
diff --git a/prereqs/hardware.md b/prereqs/hardware.md
index a790698..deafaaf 100644
--- a/prereqs/hardware.md
+++ b/prereqs/hardware.md
@@ -29,7 +29,7 @@
   emulate the fabric in software (e.g., using Mininet), but this applies
   only to specific use-cases.
 
-* **Access Devices**: At the moment, both R-CORD and M-CORD work
+* **Access Devices**: At the moment, SEBA and M-CORD work
   with very specific access devices, as described below. We strongly
   recommend using these tested devices.
 
@@ -72,7 +72,7 @@
         * Robofiber QSFP-40G-03C QSFP+ 40G direct attach passive
         copper cable, 3m length - S/N: QSFP-40G-03C
 
-* **R-CORD Access Devices and Optics**
+* **SEBA Access Devices and Optics**
     * **GPON**
         * **OLT**: Celestica CLS Ruby S1010 (experimental, only top-down provisioning is supported - through manual customizations)
             * Compatible **OLT optics**
diff --git a/prereqs/helm.md b/prereqs/helm.md
index b3a6cd0..c75080f 100644
--- a/prereqs/helm.md
+++ b/prereqs/helm.md
@@ -42,5 +42,6 @@
 ## Next Step
 
 Once you are done, you are ready to deploy CORD components using their
-helm charts! See [Bringing Up CORD](../profiles/intro.md). For more detailed
-information, see the [helm chart reference guide](../charts/helm.md).
+helm charts! Start by bringing up the
+[CORD Platform](../platform.md). For more detailed information, see
+the [Helm Reference](../charts/helm.md).
diff --git a/prereqs/openstack-helm.md b/prereqs/openstack-helm.md
index aefa6f1..b3941c7 100644
--- a/prereqs/openstack-helm.md
+++ b/prereqs/openstack-helm.md
@@ -4,7 +4,8 @@
 project can be used to install a set of Kubernetes nodes as OpenStack
 compute nodes, with the OpenStack control services (nova, neutron,
 keystone, glance, etc.) running as containers on Kubernetes. This is
-necessary, for example, to run the M-CORD profile.
+an easy way to bring up an OpenStack cluster that can be controlled
+by loading the [Base OpenStack](../charts/base-openstack.md) chart.
 
 Instructions for installing `openstack-helm` on a single node or a
 multi-node cluster can be found at
diff --git a/prereqs/optional.md b/prereqs/optional.md
deleted file mode 100644
index e19dbe3..0000000
--- a/prereqs/optional.md
+++ /dev/null
@@ -1,7 +0,0 @@
-# Optional Packages
-
-Although not required, you may want to install the following packages:
-
-* **OpenStack:** If you need to include OpenStack in your deployment,
-  so you can bring up VMs on your POD, you will need to following the
-  [OpenStack deployment](openstack-helm.md) guide.
diff --git a/prereqs/software.md b/prereqs/software.md
index 03b0a07..971a296 100644
--- a/prereqs/software.md
+++ b/prereqs/software.md
@@ -4,11 +4,3 @@
 pretty much any Kubernetes environment. It is your choice how
 to install Kubernetes, although this section describes automation
 scripts we have found useful.
-
-> **Note:** M-CORD is the exception since its components still depend on
-> OpenStack, which is in turn deployed as a set of Kubernetes containers
-> using the [openstack-helm](https://github.com/openstack/openstack-helm)
-> project. Successfully installing the OpenStack Helm charts requires
-> some additional system configuration besides just installing Kubernetes
-> and Helm. You can find more informations about this in the
-> [OpenStack Support](./openstack-helm.md) installation section.
diff --git a/profiles/seba/install.md b/profiles/seba/install.md
index 43176f8..f32b61f 100644
--- a/profiles/seba/install.md
+++ b/profiles/seba/install.md
@@ -8,9 +8,16 @@
 
 In order to run SEBA you need to have the [CORD Platform](../../platform.md) installed.
 
-### SEBA as a whole
+Specifically, wait for the EtcdCluster CustomResourceDefinitions to
+appear in Kubernetes:
 
-To install the SEBA Profile you can use the corresponding chart:
+```shell
+kubectl get crd | grep etcd
+```
+
+Once the CRDs are present, proceed with the `seba` chart installation.
+
+### SEBA as a whole
 
 ```shell
 helm install -n seba cord/seba --version=1.0.0